
Shobit-MCA Sem II- Network Security and Cryptography (1)

Published by Teamlease Edtech Ltd (Amita Chitroda), 2023-05-18 05:49:20



A final installation test is run to make sure that the system still functions as it should. Testing against security requirements is harder, however, because those requirements often state what a system should not do.

Genetic Diversity

At your local electronics shop you can buy a combination printer-scanner-copier-fax machine. It comes at a good price (compared to the cost of buying the four components separately) because there is considerable overlap in implementing the functionality among the four. Moreover, the multifunction device is compact, and you need install only one device on your system, not four. But if any part of it fails, you lose a lot of capabilities all at once. So the multipurpose machine represents the kinds of trade-offs among functionality, economy, and availability that we make in any system design.

An architectural decision about these types of devices is related to the arguments above for modularity, information hiding, and reuse or interchangeability of software components. For these reasons, some people recommend heterogeneity or "genetic diversity" in system architecture: having many components of a system come from one source, or relying on a single component, is risky, they say. However, many systems are in fact quite homogeneous in this sense. For reasons of convenience and cost, we often design systems with software or hardware (or both) from a single vendor. For example, in the early days of computing, it was convenient to buy "bundled" hardware and software from a single vendor. There were fewer decisions for the buyer to make, and if something went wrong, only one phone call was required to initiate troubleshooting and maintenance. Daniel Geer et al., in "CyberInsecurity: The Cost of Monopoly," examined the monoculture of computing dominated by one manufacturer, often characterized by Apple or Google today, Microsoft or IBM yesterday, unknown tomorrow. They looked at the parallel situation in agriculture, where an entire crop may be vulnerable to a single pathogen.
In computing, the pathogenic equivalent may be malicious code, from the Morris worm to the Code Red worm; these "infections" were especially harmful because a significant proportion of the world's computers were disabled by them, since so many machines ran versions of the same operating system (Unix for Morris, Windows for Code Red).

4.3 OPERATING SYSTEM SECURITY

Computer client and server systems are central components of the IT infrastructure for most organizations. The client systems provide access to organizational data and applications, supported by the servers housing those data and applications. Given that most large software products almost certainly contain security flaws, the installation and ongoing operation of these systems must be managed so as to maintain suitable levels of security despite the presence of those flaws. In some cases, we may be able to use systems that have been designed and tested to be secure by default; we will look at a few of these options in the next chapter. Following the general approach, we describe how to secure systems as a hardening process that encompasses the planning, installation, configuration, update, and maintenance of the operating system and the key applications in use. We examine this process first for operating systems, then for key applications in general, before discussing some specifics of Linux and Windows systems. We wrap up with a discussion of how to secure virtualized systems, which allow several virtual machines to run on a single physical system.

Physical hardware is at the bottom of the stack, followed by the base operating system, which includes privileged kernel code, APIs, and services, with user applications and utilities in the top layer, as seen in Figure 3.5. The figure also shows the BIOS and possibly other code that is external to, and largely invisible from, the operating system kernel, but which is used when booting the system or to support low-level hardware control. Each of these layers of code must have suitable hardening measures in place to provide acceptable security services, and each layer is vulnerable to attack from below if the lower layers are not properly secured.
According to some assessments, a small number of fundamental hardening steps can prevent a significant fraction of recent attacks. The "Top 35 Mitigation Strategies" list published by the Australian Defence Signals Directorate (DSD) in 2010 notes that implementing just the top four of these strategies would have prevented more than 70% of the targeted cyber intrusions analyzed by DSD in 2009. These top four measures are:

1. Patch operating systems and applications using auto-update
2. Patch third-party applications

3. Restrict admin privileges to users who need them
4. White-list approved applications

Fig. 4.8 Operating System Security Layers

This section covers all four of these measures, as well as several others from the DSD list. These procedures are broadly in line with the "20 Critical Controls" established in the United States by the Department of Homeland Security, the National Security Agency, the Department of Energy, SANS, and others.

System Security Planning

The first step in deploying new systems is planning. Careful planning helps ensure that the new system is as secure as possible and complies with any necessary policies. This planning should be informed by a wider security assessment of the organization, since every organization has distinct security requirements and concerns. The goal of the system installation planning process is to maximize security while minimizing costs. Extensive experience shows that it is significantly more complicated and expensive to retro-fit security later than to plan and provide it during the initial deployment. The planning process must determine the security requirements of the system, its applications and data, and of its users. This information then guides the selection of suitable software for the operating system and applications, the user configuration and access control settings, and the choice of additional hardening measures. The plan must also identify the appropriate personnel to install and manage the system, along with their skill and training requirements.

This planning includes consideration of:

• The purpose of the system, the type of information it stores, the applications and services it provides, and the security requirements it must meet
• The categories of users of the system, their privileges, and the types of information they can access
• How the users are authenticated
• How access to the information stored on the system is managed
• What access the system has to information stored on other hosts, such as file or database servers, and how this is managed
• Who will administer the system, and how they will manage it (via local or remote access)
• Any additional security measures required on the system, including the use of host firewalls, anti-virus or other malware protection mechanisms, and logging

Operating Systems Hardening

The first and most important step in securing a system is to secure the base operating system, upon which all other applications and services rely. A well-installed, patched, and configured operating system is required for a solid security foundation. Unfortunately, the default configuration of many operating systems prioritizes ease of use and functionality over security. Further, since every organization has its own security needs, the appropriate security profile, and hence configuration, will also differ. What is required for a particular system should be identified during the planning phase, as we have just discussed. While the details of how to secure each specific operating system differ, the broad approach is similar. Appropriate security configuration guides and checklists exist for most common operating systems, and these should be consulted, though always informed by the specific needs of each organization and its systems. In some cases, automated tools are available to further assist in securing the system configuration.

The following basic steps should be used to secure an operating system:

• Install and patch the operating system
• Harden and configure the operating system to adequately address the identified

security needs of the system by:
  • removing unnecessary services, applications, and protocols
  • configuring users, groups, and permissions
  • configuring resource controls
• Install and configure additional security controls, such as anti-virus, host-based firewalls, and intrusion detection systems (IDS), if needed
• Test the security of the basic operating system to ensure that the steps taken adequately address its security needs

Operating System Installation: Initial Setup and Patching

System security begins with the installation of the operating system. As we have already noted, a network-connected, unpatched system is vulnerable to exploit during its installation or continued use. Hence it is critical that the system not be exposed while in this vulnerable state. Ideally, new systems should be constructed on a protected network. This may be a completely isolated network, with the operating system image and all available patches transferred to it using removable media such as DVDs or USB drives. Given the prevalence of malware that can propagate via removable media, care is needed to ensure the media used here are not compromised. Alternatively, a network with severely restricted access to the wider Internet may be used; ideally it should have no inbound access, and outbound access only to the key sites needed for system installation and patching. In either case, the system should be fully installed and hardened before being deployed to its intended, more accessible, and hence more vulnerable, location. The initial installation should install only those software packages required for the system to perform its intended function. Shortly, we will examine why it is important to keep the number of packages on the system to a minimum. The overall boot process must also be secured.
This may require adjusting options in the BIOS code used when the system first boots, setting a password required for changes to the BIOS settings, and limiting which media the system is normally permitted to boot from. This is necessary to prevent an attacker from changing the boot process in order to install a covert hypervisor, or simply booting the system from external media in order to bypass the normal system access controls on locally stored data. As we will see later, a cryptographic file system can be used to counter this threat.

Care is also required in selecting and installing any additional device driver code, since it executes with full kernel-level privileges but is often supplied by a third party. Given the high level of trust placed in such driver code, its integrity and source must be carefully validated. A malicious driver can bypass many security controls in order to install malware; both the Blue Pill demonstration rootkit and the Stuxnet worm did exactly this.

Given the continuing discovery of software and other vulnerabilities in widely used operating systems and applications, it is critical to keep the system as up to date as possible, with all security-relevant patches installed. This indeed covers the top two of the four key DSD mitigation strategies we noted earlier. Nearly all commonly used systems now provide utilities that can automatically download and apply security updates. These tools should be configured and used to minimize the time a system is vulnerable to flaws for which patches exist. On change-controlled systems, however, automatic updates should not be used, since in rare but significant cases a security patch can introduce instability. For systems where availability and uptime are critical, all patches should therefore be staged and validated on test systems before being deployed to production.

Remove Unnecessary Services, Applications, and Protocols

Since any software package running on a system may contain vulnerabilities, the risk is clearly reduced if fewer software packages are available to run. There is a clear trade-off between usability, which favors installing any software that may be needed at some point in the future, and security, which favors limiting the amount of software installed.
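Reviewing this trade-off in practice starts with enumerating what is actually enabled. A minimal sketch follows; on a real systemd-based Linux host the list would come from `systemctl list-unit-files --state=enabled`, but a hard-coded sample is used here so the sketch is safe to run anywhere, and the choice of services to flag is illustrative:

```shell
# Sketch: reviewing enabled services so unneeded ones can be removed.
# On a real systemd host the list would come from:
#   systemctl list-unit-files --state=enabled
# A sample list is used here so the sketch runs anywhere.
enabled="$(mktemp)"
cat > "$enabled" <<'EOF'
sshd.service
cups.service
telnet.socket
nginx.service
EOF
# Flag legacy cleartext services that hardening guides recommend removing.
grep -E 'telnet|rsh|ftp' "$enabled"
```

Anything flagged here would then be disabled or, better, uninstalled, so that the vulnerable code is no longer present on the system at all.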
The services, applications, and protocols required will differ greatly between organizations, and even between systems within an organization. To improve security, the system planning process should identify what is actually required for a given system, so that a suitable level of functionality is provided while unneeded software is eliminated. The default configuration of most distributed systems is set to maximize ease of use and functionality rather than security.

When performing the initial installation, the supplied defaults should not be used; rather, the installation should be customized so that only the required packages are installed. If additional packages are needed later, they can be installed when they are required.

Configure Users, Groups, and Authentication

Not all users with access to a system will have the same access to all of the data and resources on that system. All modern operating systems implement access controls to data and resources. Nearly all provide some form of discretionary access control, and some systems may also provide role-based or mandatory access control mechanisms. The categories of users on the system, their privileges, the types of information they may access, and how and where they are defined and authenticated should all be decided during the system planning process. Some users will have elevated privileges to administer the system; others will be normal users, sharing appropriate access to files and other data as required; and there may even be guest accounts with very limited access. Restricting elevated privileges to only those users who require them is the third of the four key DSD mitigation strategies. Furthermore, it is highly desirable that such users access elevated privileges only when necessary to perform a task, and otherwise use the system as a normal user. This improves security by providing a smaller window of opportunity for an attacker to exploit the actions of such privileged users. Some operating systems provide special tools or access mechanisms to assist administrative users in elevating their privileges only when necessary, and in appropriately logging these actions. One key decision is whether the users, groups, and authentication used will be defined locally on the system or managed by a centralized authentication server. Whichever approach is chosen, the appropriate details are now configured on the system.
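The privilege-restriction measure above can be sketched with a sudoers drop-in that grants one administrator exactly one privileged command. This is a minimal sketch: the user name and command are made-up examples, and on a real system the fragment would live in /etc/sudoers.d/ and be edited via visudo, not written to a scratch file:

```shell
# Sketch: restricting elevated privileges with a sudoers drop-in.
# The user name and command are illustrative; a real deployment would
# place this in /etc/sudoers.d/ via visudo rather than mktemp.
frag="$(mktemp)"
cat > "$frag" <<'EOF'
# alice may restart the web service as root, and nothing else
alice ALL=(root) /usr/bin/systemctl restart httpd
EOF
grep -c '^alice' "$frag"
```

Scoping the grant to a single command keeps the window of privileged activity small and makes the elevation easy to log and audit.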
At this point, any default accounts included as part of the system installation should also be secured. Those that are not required should be removed or at least disabled. System accounts that manage services on the system should be set so they cannot be used for interactive logins. Any passwords installed by default should also be changed to new values with appropriate security.
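The account review above can be partly automated by checking which accounts still carry an interactive login shell. A minimal sketch, run here against a sample passwd(5)-style file so it is safe to execute; on a real system it would be pointed at /etc/passwd, and any flagged service account would be disabled (for example by giving it a nologin shell):

```shell
# Sketch: auditing a passwd(5)-style file for accounts that retain an
# interactive login shell. A sample file stands in for /etc/passwd.
pw="$(mktemp)"
cat > "$pw" <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
games:x:5:60:games:/usr/games:/usr/sbin/nologin
guest:x:1001:1001::/home/guest:/bin/sh
EOF
# Print account names whose shell (field 7) is not nologin/false.
awk -F: '$7 !~ /(nologin|false)$/ {print $1}' "$pw"
```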

Any policy that applies to authentication credentials, and especially to password security, is also configured at this stage. This includes details of which authentication methods are accepted for different methods of account access, and of the required length, complexity, and age allowed for passwords.

Configure Resource Controls

Once the users and their associated groups are defined, appropriate permissions can be set on data and resources to match the specified policy. This may be to limit which users can execute some programs, especially those that modify the system state, or to limit which users can read or write data in certain directory trees. Many of the security hardening guides provide lists of recommended changes to the default access configuration to improve security.

Install Additional Security Controls

Further security improvement may be possible by installing and configuring additional security tools such as anti-virus software, host-based firewalls, IDS or IPS software, or application white-listing. Some of these may be supplied with the operating system but not configured or enabled by default; others are third-party products that are acquired and used. Given the increasing prevalence of malware, appropriate anti-virus software (which, as we have noted, addresses a wide range of malware types) is an essential security component on many systems. Anti-virus products have traditionally been used on Windows systems, since their widespread use made them a preferred target for attackers. However, as other platforms, especially smartphones, have grown in popularity, more malware has been developed for them. Hence appropriate anti-virus software should be considered as part of the security profile for any system. Host-based firewalls, IDS, and IPS software also offer a further means of limiting remote network access to services on the system.
If remote access to a service is not required, though some local access is, such restrictions help secure the service against remote attackers. Firewalls are traditionally configured to limit access by port or protocol, from some or all external systems. Some may also be configured to allow access from or to specific programs on the system, further restricting the attack surface and limiting an attacker's ability to install and access their own malware. IDS and IPS software may include additional mechanisms such as traffic monitoring or file integrity checking to identify and even respond to some types of attack.
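A default-deny host firewall of the kind described above can be sketched as an nftables ruleset. This is a minimal sketch under stated assumptions: it assumes an nftables-based Linux host, it is written to a scratch file rather than the usual /etc/nftables.conf (where it would be loaded with `nft -f`), and allowing only SSH inbound is an illustrative policy choice:

```shell
# Sketch: a default-deny inbound firewall policy as an nftables ruleset.
# Written to a scratch file here; normally /etc/nftables.conf.
rules="$(mktemp)"
cat > "$rules" <<'EOF'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 22 accept
  }
}
EOF
grep -c 'policy drop' "$rules"
```

The `policy drop` line is what makes the chain default-deny: anything not matched by an explicit accept rule is discarded.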

Another additional control is to white-list applications, limiting the programs that can execute on the system to just those in an explicit list. Such a technology can prevent an attacker from installing and running their own malware, and is the last of the four key DSD mitigation strategies. While this does improve security, it functions best in an environment with a predictable set of applications that users require. Any change in software usage requires a change in the configuration, which may result in increased IT support demands. Not all organizations or systems will be sufficiently predictable to suit this type of control.

Test the System Security

The final step in the process of initially securing the base operating system is security testing. The goal is to ensure that the previous security configuration steps have been correctly applied, and to identify any possible vulnerabilities that must be corrected or managed. Many security hardening guides provide suitable checklists, and there are also programs specifically designed to review a system against basic security requirements and to scan for vulnerabilities and poor configuration practices. This should be done following the initial hardening of the system, and then repeated periodically as part of the security maintenance process.

Application Security

Once the base operating system is installed and appropriately secured, the required services and applications must then be installed and configured. The steps for this are broadly similar to those in the preceding section. As with the base operating system, the concern is to install only software that is essential to meet the system's required functionality, in order to reduce the number of potential vulnerabilities. Remote access or service software is of particular concern, since an attacker may be able to exploit it to gain remote access to the system.
As a result, any such software must be carefully selected and configured. Each service or application chosen must then be installed and patched to the most recent available secure version for the system. This may come from the operating system distribution's additional packages or from a separate third-party package. As with the base operating system, use of an isolated, secure build network is desirable.

Application Configuration

Any application-specific configuration is then performed. This may include creating and specifying appropriate data storage areas for the application, and making suitable changes to the application or service default configuration details. Some applications or services may include default data, scripts, or user accounts. These should be reviewed and only retained if required, and suitably secured. A well-known example of this is web servers, which often include a number of example scripts, many of which are known to be insecure; these should not be used in their supplied form.

The access rights granted to the application should also be carefully considered during the configuration process. Again, remotely accessible services, such as web and file transfer services, are of particular concern. The server application should not be granted the right to modify files unless that function is specifically required. A very common configuration fault with web and file transfer servers is for all the files supplied by the service to be owned by the same "user" account that the server runs as. The consequence is that any attacker able to exploit some vulnerability in either the server software or a script run by the server may be able to modify any of these files. The large number of "web defacement" attacks is clear evidence of this type of insecure configuration. Much of the risk from this type of attack is mitigated by ensuring that most of the files can only be read, not written, by the server. Only those files that need to be modified, for example to store uploaded form data or logging details, should be writable by the server. Instead, the files should mostly be owned and modified by the users on the system who are responsible for maintaining the information.

Encryption Technology

Encryption is a key enabling technology that may be used to secure data both in transit and when stored.
If such technologies are required for the system, then they must be configured, and appropriate cryptographic keys created, signed, and secured. If secure network services are provided, most likely using either TLS or IPsec, then suitable public and private keys must be generated for each of them. X.509 certificates are then created and signed by a suitable certificate authority, linking each service identity with the public key in use. If secure remote access is provided using Secure Shell (SSH), then appropriate server, and possibly client, keys must be created. Cryptographic file systems are another use of encryption; if desired, these must be created and secured with suitable keys.

Security Maintenance

Once the system is appropriately built, secured, and deployed, the process of maintaining security is continuous. This results from the constantly changing environment, the discovery of new software vulnerabilities, and hence exposure to new threats. The security maintenance process includes the following additional steps:

• Monitoring and analyzing logging information
• Performing regular backups
• Recovering from security compromises
• Regularly testing system security
• Using appropriate software maintenance processes to patch and update all critical software, and to monitor and revise configuration as needed

We have already noted the need to configure automatic patching and updating where possible, or to have a process in place to manually test and install patches on configuration-controlled systems, and that the system should be regularly tested using checklists or automated tools where feasible. We now consider the critical logging and backup procedures.

Logging

As one report notes, "logging is a foundation of a good security posture." Logging is a reactive control that can only inform you about problems that have already occurred. Effective logging, however, helps ensure that in the event of a system breach or failure, system administrators can more quickly and accurately identify what happened, and thus most effectively focus their remediation and recovery efforts. The key is to ensure that you capture the correct data in the logs, and are then able to appropriately monitor and analyze this data. Logging information can be generated by the system, the network, and applications. The range of logging data acquired should be determined during the system planning stage, since it depends on the security requirements and information sensitivity of the server. Logging can generate significant volumes of data.
It is critical that sufficient space be allocated for them, and that a suitable automatic log rotation and archival system be configured to help manage the overall size of the logging information. Manual analysis of logs is tedious and is not a reliable means of detecting adverse events. Rather, some form of automated analysis is preferred, as it is more likely to identify abnormal activity.

Data Backup and Archive

Performing regular backups of data on the system is another critical control that assists with maintaining the integrity of the system and user data. There are many reasons data can be lost from a system, including hardware or software failures and accidental or deliberate corruption. There may also be legal or operational requirements for the retention of data. Backup is the process of making copies of data at regular intervals, allowing the recovery of lost or corrupted data over relatively short periods of a few hours to some weeks. Archive is the process of retaining copies of data over extended periods of time, months or years, in order to meet legal and operational requirements for access to past data. These processes are often linked and managed together, although they address distinct needs. The needs and policy relating to backup and archive should be determined during the system planning stage. Key decisions include whether the backup copies are kept online or offline, and whether copies are stored locally or transported to a remote site. The trade-offs include ease of implementation and cost versus greater security and robustness against different threats. A good example of the consequences of poor choices here was seen in the attack on an Australian hosting provider in early 2011. The attackers destroyed not only the live copies of thousands of customers' sites, but also all of the backup copies stored online. As a result, many customers who had not kept their own backup copies of their sites lost all of their content and data, with serious consequences for them and for the hosting provider alike.
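The basic backup step can be sketched with a dated tar archive. This is a minimal sketch using scratch directories so it is safe to run; a real backup would be written to removable or remote storage precisely so that it does not share the fate of the live copies, which was the lesson of the hosting-provider incident:

```shell
# Sketch: a dated backup archive made with tar. Scratch directories
# stand in for the real data directory and backup destination.
src="$(mktemp -d)"; dest="$(mktemp -d)"
echo "customer site content" > "$src/site.html"
# Create a compressed, date-stamped archive of the data directory.
tar -czf "$dest/backup-$(date +%F).tar.gz" -C "$src" .
# Verify the archive lists the expected file.
tar -tzf "$dest"/backup-*.tar.gz
```

Restoring is the mirror operation (`tar -xzf` into a recovery location), and restores should be rehearsed periodically, since an untested backup is not a dependable one.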
Conversely, many organizations that kept only onsite backups have lost all of their data as a result of a fire or flood in their IT center. These risks must be appropriately evaluated.

Linux/Unix Security

Having discussed the process of enhancing security in operating systems through careful installation, configuration, and management, we now consider some specific aspects of this process as it relates to Unix and Linux systems, beyond the general guidance given in this section.

There is a large range of resources available to assist administrators of these systems, including many texts (for example [NEME10]), online resources such as the "Linux Documentation Project," and specific system hardening guides such as those provided by the NSA "Security Configuration Guides." These resources should be used as part of the system security planning process in order to develop procedures appropriate to the security requirements identified for the system. Ensuring that system and application code is kept up to date with security patches is a widely recognized and critical control for maintaining security. Modern Unix and Linux distributions typically include tools for automatically downloading and installing software updates, including security updates, which can minimize the time a system is vulnerable to known vulnerabilities for which patches exist. For example, Red Hat, Fedora, and CentOS include up2date or yum; SuSE includes yast; and Debian uses apt-get, though it must be run as a cron job for automatic updates. Whichever update tool is provided with the distribution in use should be configured to install at least critical security patches in a timely manner. As previously noted, automatic updates should not be run on change-controlled systems, since they may introduce instability; all patches should instead be validated on test systems before being deployed to production systems.

Application and Service Configuration

Configuration of applications and services on Unix and Linux systems is most commonly implemented using separate text files for each application and service. System-wide configuration details are generally located either in the /etc directory or in the installation tree for a specific application. Where appropriate, individual user configurations that can override the system defaults are located in hidden "dot" files in each user's home directory.
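The cron-driven apt-get update mentioned above can be sketched as a crontab fragment. This is a minimal sketch: the fragment is written to a scratch file so it is safe to run, whereas a real deployment would install it as root's crontab or as a file in /etc/cron.d/, and the schedule shown is an illustrative choice:

```shell
# Sketch: a cron entry for nightly apt-based updates, written to a
# scratch file rather than installed.
cr="$(mktemp)"
cat > "$cr" <<'EOF'
# nightly at 03:15: refresh package lists, then apply pending updates
15 3 * * * root apt-get update -q && apt-get upgrade -qy
EOF
grep -c 'apt-get' "$cr"
```

As the text cautions, a job like this belongs on ordinary systems only; change-controlled systems should stage the same patches through test machines first.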
The name, format, and usage of these files are very much dependent on the particular system version and applications in use. Hence the system administrators responsible for the secure configuration of such a system must be suitably trained and familiar with them. Traditionally, these files were edited individually with a text editor, with any changes taking effect either when the system was next rebooted or when the relevant process was sent a signal indicating that it should reload its configuration settings. To ease management, particularly for novice administrators, most modern systems provide a GUI interface to these configuration files. Use of such a manager may be suitable for small sites with a few systems. Larger organizations may instead prefer centralized administration, with a central repository of critical configuration files that can be automatically customized and distributed to the systems they manage. The most important changes needed to improve system security are to disable unnecessary services and applications, particularly remotely accessible services and applications, and to ensure that those that are needed are appropriately configured, following the relevant security guidance for each.

Users, Groups, and Permissions

Unix and Linux systems implement discretionary access control to all file system resources. These include not only files and directories but devices, processes, memory, and indeed most system resources. Access is specified as granting read, write, and execute permissions to each of owner, group, and others, for each resource, as shown in Figure 4.6. These are set using the chmod command. Some systems also support extended file attributes with access control lists, which provide more flexibility by specifying these permissions for each entry in a list of users and groups; these extended access rights are typically set and displayed using the setfacl and getfacl commands. The chmod command can also be used to set the set-user (setuid) or set-group (setgid) permissions on a resource. User accounts and group membership details are generally stored in the /etc/passwd and /etc/group files, though modern systems may also obtain this information from external repositories queried using LDAP or NIS, for example. These sources of information, and any associated authentication credentials, are specified in the system's PAM (pluggable authentication modules) configuration, typically via text files in the /etc/pam.d directory.
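The owner/group/other model above can be demonstrated on a scratch file with chmod, reading the resulting mode back with stat (the `stat -c` option assumes GNU coreutils, as found on Linux):

```shell
# Sketch: setting and reading owner/group/other permissions.
f="$(mktemp)"
chmod 640 "$f"      # owner: read/write; group: read; others: none
stat -c '%a' "$f"   # prints the octal mode
```

Mode 640 illustrates the common hardening pattern for sensitive data files: the owning user maintains the file, a designated group may read it, and everyone else is denied.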
Users must be assigned to appropriate groups, which grant them any required access, in order to partition access to information on the system. The number of groups and their membership should be decided during the system security planning process and then recorded in the appropriate information store, whether locally via /etc configuration files or centrally via a directory service. Any default or generic accounts supplied with the system should be reviewed at this time and removed if they are not needed. Other accounts that are required but are not associated with a user who needs to log in should have their login capability disabled and any associated passwords

or authentication credentials removed. Guides to hardening Unix and Linux systems also often recommend tightening the access permissions on critical directories and files in order to further limit access to them. Programs that are set-user (setuid) to root or set-group (setgid) to a privileged group are a key target for attackers. Such programs execute with superuser rights, or with access to resources belonging to the privileged group, no matter which user runs them. An adversary who exploits a software flaw in such a program can acquire these elevated privileges; this is referred to as a local exploit. A remote attacker who exploits a security flaw in a network server mounts what is called a remote exploit. It is widely agreed that the number of setuid root programs should be kept to a minimum. They cannot be removed entirely, since superuser rights are necessary to access some system resources; examples are programs that handle user login and network services that must bind to privileged ports. Other programs, which were previously made setuid root for the programmer's convenience, can work correctly if made setgid to an appropriate privileged group with access to the required resource; programs that display system status or deliver mail have been changed in this way. System hardening guides may recommend further adjustments, possibly including the removal of applications that are not required on a given system.

Remote Access Controls

Given the risk of remote exploits, it is critical to restrict access to only those network services that are really necessary. Host-based firewalls or network access control mechanisms may provide additional defenses, and there are various options for these on Unix and Linux platforms. One approach that network servers can employ is the TCP Wrappers library and the tcpd daemon. Tcpd, which listens for connection requests on behalf of lightweight services, can be used to "wrap" them.
On receiving a request, it validates that the request is permitted by the configured policy before invoking the server program to handle it; rejected requests are logged. More complex and heavily loaded servers incorporate this functionality into their own connection-management code, using the TCP Wrappers library and the same policy configuration files. These files are /etc/hosts.allow and /etc/hosts.deny, and they should be set up as the security policy requires. Several host firewall programs are available. Linux systems now primarily use the iptables program to configure the netfilter kernel module, which provides comprehensive, if complex, stateful packet filtering, tracking, and alteration facilities.
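The TCP Wrappers policy evaluation just described can be modeled in a few lines of Python. This is a simplified sketch of the real semantics (hosts.allow is consulted first, then hosts.deny, and a request matching neither file is permitted by default); the service names, addresses, and rule format here are illustrative, not the actual hosts_access syntax.

```python
def wrapper_decision(service, client, allow_rules, deny_rules):
    """Simplified model of TCP Wrappers policy evaluation.
    Rules are (service, client) pairs; "ALL" is a wildcard.
    /etc/hosts.allow is checked first, then /etc/hosts.deny;
    anything matching neither is allowed by default."""
    def matches(rules):
        return any(s in ("ALL", service) and c in ("ALL", client)
                   for s, c in rules)
    if matches(allow_rules):
        return "allow"
    if matches(deny_rules):
        return "deny"
    return "allow"

# Hypothetical policy: sshd open to one management host, everything else refused.
allow = [("sshd", "10.0.0.5")]
deny = [("ALL", "ALL")]
print(wrapper_decision("sshd", "10.0.0.5", allow, deny))  # allow
print(wrapper_decision("ftpd", "10.0.0.9", allow, deny))  # deny
```

The catch-all deny rule mirrors the common hardening advice of placing ALL: ALL in hosts.deny so that only explicitly allowed services are reachable.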

BSD-based systems (including macOS) commonly use the ipfw program, which has similar, though less extensive, capabilities. Most systems provide an administrative tool that generates common configurations and selects which services may access the system. Given the expertise required to edit these configuration files directly, such tools should be used unless there are non-standard requirements.

Logging and Log Rotation

Much software can be configured to log at various levels of detail, ranging from "debugging" (the highest level of detail) to "none." Although an intermediate setting is usually the best choice, you should not presume that the default setting is optimal. In addition, many software packages let you state explicitly whether to write event data to a dedicated file or to use the syslog facility by writing log data to /dev/log. If you want to manage system logs in a consistent, centralized manner, it is usually preferable for applications to send their log data to /dev/log. Note that logrotate can rotate any logs on the machine, whether they were produced by syslogd, Syslog-NG, or an individual program.

Application Security Using a chroot Jail

Some network-accessible services do not need full access to the file system; they require only a limited set of data files and directories in order to function. A typical example is FTP, which allows users to download files from, and upload files to, a designated directory tree. If such a server were compromised and had access to the entire system, an intruder could potentially read and alter data elsewhere. On Unix and Linux systems, such services can be run in a chroot jail, which limits the server's view of the file system to a specified portion by remapping the root directory "/" to some other directory (e.g., /srv/ftp/public).
To the chrooted server, everything in this chroot jail appears to actually be in / (e.g., the real directory /srv/ftp/public/etc/myconfigfile appears as /etc/myconfigfile inside the jail). Files in directories outside the chroot jail (e.g., /srv/www or /etc) are not visible or reachable at all. As a result, chrooting can help contain the consequences of a vulnerable or hijacked server. The primary downside of this strategy is complexity: a number of files, directories, and devices must be copied into the chroot jail, including any shared libraries used by the server. Although detailed procedures for chrooting many different applications are available, deciding what must go into the jail for the server to run properly can be difficult.
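The path re-rooting described above can be illustrated with a short Python sketch. This only simulates how paths appear from inside a jail; a real chroot requires root privileges and the chroot(2) system call. The jail root and file names are the hypothetical ones from the example in the text.

```python
def jail_view(real_path, jail_root):
    """How a path looks to a process chrooted into jail_root:
    paths inside the jail are re-rooted at '/', and paths
    outside the jail are simply not reachable."""
    if real_path == jail_root:
        return "/"
    if real_path.startswith(jail_root + "/"):
        return real_path[len(jail_root):]
    return None  # invisible from inside the jail

jail = "/srv/ftp/public"
print(jail_view("/srv/ftp/public/etc/myconfigfile", jail))  # /etc/myconfigfile
print(jail_view("/etc/passwd", jail))                       # None
```

Note that the mapping is one-way: even if the server is tricked into opening /etc/passwd, the kernel resolves that name inside the jail, not in the real file system.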

Security Testing

System hardening guides such as the NSA Security Configuration Guides include security checklists for a number of Unix and Linux distributions. A variety of commercial and open-source tools are also available for system security scanning and vulnerability testing. Nessus is one of the best known; originally an open-source tool, it was commercialized in 2005, although limited free-use versions remain available. Tripwire is a well-known file integrity checking tool that keeps a store of cryptographic hashes of monitored files and scans for any changes, whether malicious or due to a badly managed update; it too began as an open-source tool and now has both commercial and free variants. The Nmap network scanner is another well-known and widely deployed assessment tool, which focuses on identifying and profiling the hosts on a target network and the network services they offer.

Windows Security

We now consider some specific issues with the secure installation, configuration, and management of Microsoft Windows systems. For many years, these systems have made up a sizable share of all general-purpose system installations. As a result, attackers have targeted them directly, necessitating security remedies specific to these issues. The process of providing appropriate levels of security still follows the general outline we describe in this chapter.
Beyond the general guidance in this section, there is a large range of resources available to assist administrators of these systems, including reports such as [SYMA07], online resources such as the Microsoft Security Tools and Checklists, and specific system hardening guides such as the NSA Security Configuration Guides.

Patch Management

The Windows Update service and the Windows Server Update Services assist with the regular maintenance of Microsoft software and should be configured and used. Many third-party applications also provide automatic update support, and this should be enabled for selected applications.

Users Administration and Access Controls

Users and groups in Windows systems are identified by a Security ID (SID). This information may be stored and used locally, on a single system, in the Security Account Manager (SAM). It can also be managed centrally for a group of domain-joined systems, with the data provided over the LDAP protocol by a central Active Directory (AD) server. Most businesses use domains to manage numerous systems, so that users on any system in the domain can be subjected to the same policies enforced by these servers. Windows systems implement discretionary access controls to system resources such as files, shared memory, and named pipes. The access control list contains a number of entries that grant or deny access rights to a particular SID, which may identify a single user or a group of users. Windows Vista and later systems also enforce mandatory integrity controls. All objects, such as processes and files, and all users are assigned to one of four integrity levels: low, medium, high, or system. Whenever data is written to an object, the mechanism checks that the subject's integrity level is equal to or higher than the object's level. This is a form of the Biba integrity model in action. Windows systems also define privileges, which are system-wide rights granted to user accounts. Examples are the ability to back up the computer (which requires overriding the normal access controls to obtain a complete backup) and the ability to set the system time. Some privileges are considered dangerous, because an attacker could use them to harm the system; they must therefore be granted with caution.
Others are considered benign and may be granted to many or all user accounts. As with any system, hardening the configuration can include limiting the rights and privileges granted to users and groups. Because the access control list gives deny entries precedence, you can set an explicit deny permission to prevent unauthorized access to a resource, even if the user is a member of a group that is otherwise granted access.
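The deny-takes-precedence rule can be sketched as follows. This is a simplified model of Windows DACL evaluation (a real DACL is evaluated entry by entry in canonical order, which places deny entries first); the user and group names are hypothetical.

```python
def access_allowed(acl, user, user_groups, right):
    """Simplified Windows-style DACL check: deny entries take
    precedence over allow entries. acl is a list of
    (kind, principal, rights) tuples, kind being 'allow' or 'deny'."""
    principals = {user} | set(user_groups)
    relevant = [(kind, p, r) for kind, p, r in acl
                if p in principals and right in r]
    if any(kind == "deny" for kind, _, _ in relevant):
        return False            # an explicit deny always wins
    return any(kind == "allow" for kind, _, _ in relevant)

acl = [
    ("deny",  "bob",   {"write"}),          # explicit deny for one user
    ("allow", "staff", {"read", "write"}),  # group-level grant
]
print(access_allowed(acl, "bob", ["staff"], "write"))    # False
print(access_allowed(acl, "alice", ["staff"], "write"))  # True
```

Bob is a member of staff, which is granted write access, yet the explicit deny entry still blocks him, exactly the hardening technique described above.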

When accessing files on a shared resource, a combination of share and NTFS permissions may be used to provide additional security and granularity. For example, you can grant full control on a share but read-only access to the files within it. If access-based enumeration is enabled on a shared resource, it automatically hides any objects that a user is not permitted to read. This is useful with shared folders containing many users' home directories, for example. You should also ensure that users with administrative rights use them only when required, and otherwise access the system as normal users. The User Account Control (UAC) facility provided in Vista and later systems assists with this requirement. These systems also provide low-privilege service accounts that may be used for long-lived service processes, such as file, print, and DNS services, that do not require elevated privileges.

Application and Service Configuration

Unlike Unix and Linux systems, Windows consolidates most of its configuration information in the Registry, a database of keys and values used by the system and its applications. Changes to these values can be made within specific applications, which save their preference settings in the Registry under the appropriate keys and values; this approach hides the detailed representation from the administrator. Alternatively, the Registry Editor can be used to modify keys directly, which is better suited to making large-scale changes such as those suggested in hardening guides. Such changes can also be recorded in a central repository and applied whenever a user logs in to a system in the network domain.

Other Security Controls

Because of the prevalence of malware that affects Windows systems, it is critical to install and configure appropriate anti-virus, anti-spyware, personal firewall, and other malware and attack detection and handling software on such systems.
As the high infection rates in security reports reveal, this is plainly required for network-connected machines. Furthermore, as the Stuxnet attacks of 2010 showed, even isolated systems updated through removable media are vulnerable and must be secured as well. Current-generation Windows systems include some basic firewall and malware countermeasure capabilities, which should certainly be used at a minimum. Many organizations, though,

find that one or more of the many commercial products available should be used in addition to these. Unwanted interactions among anti-virus and other solutions from different vendors are one source of concern. When planning and deploying such products, care is required to avoid negative interactions and to ensure that the whole set of products in use remains maintainable. Windows systems also come with a number of cryptographic capabilities that can be employed when needed. These include support for the Encrypting File System (EFS) for encrypting files and folders, as well as BitLocker for full-disk encryption with AES.

Security Testing

System hardening guides such as the NSA Security Configuration Guides also include security checklists for various versions of Windows. There are also a number of commercial and open-source tools available to perform system security scanning and vulnerability testing of Windows systems. The Microsoft Baseline Security Analyzer is a simple, free, easy-to-use tool that aims to help small- to medium-sized businesses improve the security of their systems by checking for compliance with Microsoft's security recommendations. Larger organizations are likely better served by one of the larger, centralized, commercial security analysis suites.

4.4 ACCESS CONTROL

In a broad sense, all of computer security is concerned with access control. Indeed, RFC 4949 defines computer security as "measures that implement and assure security services in a computer system, particularly those that assure access control service." This section concentrates on a narrower, more specific sense of access control: Access control implements a security policy that specifies who or what (for example, a process) may have access to each specific system resource, and the type of access that is permitted in each instance.
Authorization: The granting of a right or permission to a system entity to access a system resource. This function determines who is trusted for a given purpose.

Audit: An independent review and examination of system records and activities in order to test for adequacy of system controls, to ensure compliance with established policy and operational procedures, to detect breaches in security, and to recommend any indicated changes in control, policy, and procedures.

Fig. 4.9 Relationship Among Access Control and Other Security Functions

An access control mechanism mediates between a user (or a process executing on behalf of a user) and system resources, such as applications, operating systems, firewalls, routers, files, and databases. The system must first authenticate the entity requesting access. Typically, the authentication function determines whether the user is permitted to access the system at all. The access control function then determines whether the specific requested access by this user is allowed. A security administrator maintains an authorization database that specifies what type of access to which resources is allowed for this user, and the access control function consults this database to determine whether to grant access. An auditing function monitors and keeps a record of user accesses to system resources. In the simple model of Figure 4.9, the access control function is shown as a single logical module. In practice, a number of components may cooperatively share the access control function. All operating systems have at least a rudimentary, and in many cases quite robust, access control component. Add-on security packages can enhance the operating system's

inherent access control capabilities. Particular applications or utilities, such as a database management system, also incorporate access control functions. External devices, such as firewalls, can also provide access control services.

Access Control Policies

An access control policy, which can be embodied in an authorization database, dictates which types of access are permitted, under what circumstances, and by whom. Access control policies are generally grouped into the following categories:

Discretionary access control (DAC): Controls access based on the identity of the requestor and on access rules (authorizations) stating what requestors are (or are not) allowed to do. This policy is termed discretionary because an entity might have access rights that permit the entity, by its own volition, to enable another entity to access some resource.

Mandatory access control (MAC): Controls access by comparing security labels (which indicate how sensitive or critical system resources are) with security clearances (which indicate which system entities are eligible to access certain resources). This policy is termed mandatory because an entity that has clearance to access a resource may not, by its own volition, enable another entity to access that resource.

Role-based access control (RBAC): Controls access based on the roles that users have within the system and on rules stating what accesses are allowed to users in given roles.

Attribute-based access control (ABAC): Controls access based on attributes of the user, the resource to be accessed, and current environmental conditions.

Subjects, Objects, and Access Rights

The subject, the object, and the access right are the three core elements of access control. A subject is an entity capable of accessing objects; in practice, the concepts of subject and process are largely interchangeable. Any user or application actually gains access to an object by means of a process that represents that user or application.
The process takes on the attributes of the user, such as access rights. A subject is typically held accountable for the actions it has initiated, and an audit trail may be used to record the association of a subject with security-relevant actions performed on an object by that subject. Basic access control systems typically define three classes of subject, with different access rights for each:

• Owner: This may be the creator of a resource, such as a file. For system resources, ownership may belong to a system administrator. For project resources, a project administrator or leader may be assigned ownership.

• Group: In addition to the privileges assigned to an owner, access rights may be granted to a named group of users, with membership in the group being sufficient to exercise these access rights. In most systems, a user may belong to multiple groups.

• World: The least amount of access is granted to users who can access the system but are not included in the categories owner and group for this resource.

An object is a resource to which access is controlled. In general, an object is an entity used to contain and/or receive information. Examples include records, blocks, pages, segments, files, portions of files, directories, directory trees, mailboxes, messages, and programs. Some access control systems also encompass bits, bytes, words, processors, communication ports, clocks, and network nodes. The number and types of objects protected by an access control system depend on the environment in which access control operates and on the desired tradeoff between security on the one hand and complexity, processing burden, and ease of use on the other.

An access right describes the way in which a subject may access an object. Access rights could include the following:

• Read: The user may view information in a system resource (e.g., a file, selected records in a file, selected fields within a record, or some combination). Read access includes the ability to copy or print.

• Write: The user may add, modify, or delete data in the system resource (e.g., files, records, programs). Write access includes read access.

• Execute: The user may execute specified programs.

• Delete: The user may delete certain system resources, such as files or records.

• Create: The user may create new files, records, or fields.
• Search: The user may list the files in a directory or otherwise search the directory.

Discretionary Access Control

As noted earlier, a discretionary access control scheme is one in which an entity may be granted access rights that permit the entity, by its own volition, to enable another entity to access some resource. A general approach to DAC, as exercised by an operating system or a database management system, is the access matrix. The access matrix concept was formulated by Lampson and refined by Graham and Denning and by Harrison et al. One dimension of the matrix consists of identified subjects that may attempt access to the resources. Typically, this list consists of individual users or user groups, although access could also be controlled for terminals, network equipment, hosts, or applications instead of, or in addition to, users. The other dimension lists the objects that may be accessed. At the greatest level of granularity, objects may be individual data fields. More aggregate groupings, such as records, files, or even the entire database, may also be objects in the matrix. Each entry in the matrix indicates the access rights of a particular subject to a particular object.

Figure 4.10a is a simple example of an access matrix. Thus, user A owns files 1 and 3 and has read and write access rights to those files; user B has read access rights to file 1; and so on. In practice, an access matrix is usually sparse and is implemented by decomposition in one of two ways. Decomposing the matrix by columns yields access control lists (ACLs); see Figure 4.10b. For each object, an ACL lists users and their permitted access rights. The ACL may contain a default, or public, entry, which grants a default set of rights to users who are not explicitly named as having special rights. Where applicable, the default set of rights should always follow the principle of least privilege or read-only access. Elements of the list may include individual users as well as groups of users.
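The access matrix and its decomposition into ACLs and capability lists can be sketched directly in Python. The subjects, files, and rights below are modeled on the style of the Figure 4.10 example but are illustrative values, not the figure's exact contents.

```python
# Access matrix: rows are subjects, columns are objects,
# entries are sets of access rights.
matrix = {
    "userA": {"file1": {"own", "read", "write"},
              "file3": {"own", "read", "write"}},
    "userB": {"file1": {"read"},
              "file2": {"own", "read", "write"}},
}

def check(subject, right, obj):
    """The controller's test: is right a present in A[S, X]?"""
    return right in matrix.get(subject, {}).get(obj, set())

def acl_for(obj):
    """One column of the matrix: the ACL attached to a single object."""
    return {s: rights[obj] for s, rights in matrix.items() if obj in rights}

def capability_list(subject):
    """One row of the matrix: the capability list held by a single subject."""
    return matrix.get(subject, {})

print(check("userB", "read", "file1"))   # True
print(check("userB", "write", "file1"))  # False
print(acl_for("file1"))
```

The two helper functions make the tradeoff discussed in the text visible: acl_for answers "who may access this object?" in one lookup per subject, while capability_list answers "what may this subject access?" directly.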
ACLs are convenient for determining which subjects have which access rights to a particular resource, because each ACL contains information specific to that resource. This data structure, however, is not convenient for determining the access rights available to a specific user. Decomposing the matrix by rows yields capability tickets. A capability ticket specifies the objects and operations authorized for a particular user. Each user has a number of tickets and may be authorized to loan or give them to others. Because tickets may be dispersed around the system, they present a greater security problem than access control lists: the integrity of a ticket must be protected and guaranteed (usually by the operating system), and in particular a ticket must be unforgeable. One way to accomplish this is to have the operating system hold all tickets

on behalf of users. These tickets would have to be held in a region of memory inaccessible to users. Another alternative is to include an unforgeable token in the capability, such as a large random password or a cryptographic message authentication code. This value is verified by the relevant resource whenever access is requested. This form of capability ticket is appropriate for use in a distributed environment, where the security of its contents cannot otherwise be guaranteed. The convenient and inconvenient aspects of capability tickets are the opposite of those for ACLs: it is easy to determine the set of access rights that a given user has, but more difficult to determine the list of users with specific access rights to a specific resource. [SAND94] presents an access-matrix-like data structure that is not sparse but is more convenient than either ACLs or capability lists: the authorization table. Each row of an authorization table specifies one access right of one subject to one resource. Sorting or accessing the table by subject is equivalent to a capability list; sorting or accessing the table by object is equivalent to an ACL. An authorization table of this kind can easily be implemented in a relational database.

Fig. 4.10 Example of Access Control Structures

An Access Control Model

This section describes a general model for DAC developed by Lampson, Graham, and Denning. A set of subjects, a set of objects, and a set of rules that govern the access of subjects to objects are assumed

in the model. Let us define the protection state of a system to be the set of information that specifies the access rights of each subject with respect to each object at a particular point in time. Three requirements can be identified: representing the protection state, enforcing access rights, and allowing subjects to alter the protection state in certain ways. The model provides a general, logical description of a DAC system that satisfies all three requirements.

Fig. 4.11 Authorization Table for the Files in Figure 4.10

To represent the protection state, we extend the universe of objects in the access control matrix to include the following:

• Processes: Access rights include the ability to delete a process, stop (block) a process, and wake up a process.

• Devices: Access rights include the ability to read or write the device, to control its operation (e.g., a disk seek), and to block or unblock the device for use.

• Memory locations or areas: Access rights include the ability to read or write certain regions of memory that are protected such that the default is to disallow access.

• Subjects: Access rights with respect to a subject have to do with the ability to grant or delete access rights of that subject to other objects, as explained subsequently.

Figure 4.12 shows an example. For an access control matrix A, each entry A[S, X] contains strings, called access attributes, that specify the access rights of subject S to object X. For example, in Figure 4.12, S1 may read file F1, because 'read' appears in A[S1, F1]. From a logical or functional point of view, a separate access control module is associated with each type of object. The module evaluates each request by a subject to access an object to determine whether the access right exists. An access attempt triggers the following steps:

1. A subject S0 issues a request of type a for object X.

2. The request causes the system (the operating system or an access control interface module of some sort) to generate a message of the form (S0, a, X) to the controller for X.

3. The controller interrogates the access matrix A to determine whether a is in A[S0, X]. If so, the access is allowed; if not, the access is denied and a protection violation occurs. The violation should trigger a warning and appropriate action.

Fig. 4.12 Extended Access Control Matrix

Every access by a subject to an object is thus mediated by the controller for that object, and the controller's decision is based on the current contents of the matrix. In addition, certain subjects have the authority to make specific changes to the access matrix. The individual entries in the access matrix are themselves regarded as objects, so a request to modify the access matrix is treated as an access to the matrix itself. An access matrix controller, which regulates updates

to the matrix, is in charge of such accesses. As indicated in Figure 4.13, the model also includes a set of rules that govern modifications to the access matrix. For this purpose, the access rights 'owner' and 'control' and the concept of a copy flag are introduced in the following paragraphs. The first three rules deal with transferring, granting, and deleting access rights. Suppose the entry a* exists in A[S0, X]. This means that S0 has access right a to object X and, because of the presence of the copy flag, can transfer this right, with or without the copy flag, to another subject; Rule R1 expresses this capability. A subject would transfer the access right without the copy flag if it were concerned that the new subject might maliciously transfer the right to yet another subject that should not have it. For example, S1 may place 'read' or 'read*' in any matrix entry in the F1 column. Rule R2 states that if S0 is designated as the owner of object X, then S0 can grant any access right to that object to any other subject; that is, S0 can add any access right to A[S, X] for any S. Rule R3 permits S0 to delete any access right from any matrix entry in a row for which S0 is the subject, and from any matrix entry in a column for which S0 owns the object. Rule R4 permits a subject to read those matrix entries that it owns. The remaining rules govern the creation and deletion of subjects and objects. Rule R5 states that any subject can create a new object, which it then owns, and can subsequently grant and delete access to that object. Under Rule R6, the owner of an object can destroy the object, resulting in the deletion of the corresponding column of the access matrix. Rule R7 enables any subject to create a new subject; the creator owns the new subject, and the new subject has 'control' access to itself.
Rule R8 permits the owner of a subject to delete the row and column (if there are subject columns) of the access matrix designated by that subject.
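Rule R1, the transfer of a right governed by the copy flag, can be sketched as follows. This is an illustrative model only; the subject and file names are hypothetical, and a real system would enforce the rule inside the access matrix controller rather than in application code.

```python
def transfer(matrix, giver, receiver, right, obj, keep_copy_flag=False):
    """Rule R1 sketch: a subject holding right 'a*' (with the copy flag)
    on an object may pass 'a' to another subject, with or without the
    flag. Without the flag, the receiver cannot transfer the right on."""
    if right + "*" not in matrix.get(giver, {}).get(obj, set()):
        raise PermissionError("giver lacks the copyable right")
    granted = right + "*" if keep_copy_flag else right
    matrix.setdefault(receiver, {}).setdefault(obj, set()).add(granted)

A = {"S1": {"F1": {"read*"}}, "S2": {}}
transfer(A, "S1", "S2", "read", "F1")     # S2 receives plain 'read'
print(A["S2"]["F1"])                      # {'read'}
try:
    transfer(A, "S2", "S3", "read", "F1")  # S2 holds no copy flag
except PermissionError as e:
    print("blocked:", e)
```

Dropping the copy flag on transfer is exactly the defensive choice described in the text: S2 can use the right but cannot propagate it to a subject that should not have it.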

Fig. 4.13 Access Control System Commands

The set of rules in Figure 4.13 is an example of the rule set that could be defined for an access control system. The following are examples of additional or alternative rules that could be included. A transfer-only right could be defined, which results in the transferred right being added to the target subject and deleted from the transferring subject. The number of owners of an object or a subject could be limited to one by not allowing the copy flag to accompany the owner right. The ability of one subject to create another subject and to have 'owner' access rights to that subject can be used to define a hierarchy of subjects. For example, in Figure 4.12, S1 owns S2 and S3, so S2 and S3 are subordinate to S1. By the rules of Figure 4.13, S1 can grant and delete access rights to S2. Thus, a subject can create another subject with a subset of its own access rights. This might be useful, for example, when a subject is invoking an application that is not fully trusted and does not want that application to be able to transfer access rights to other subjects.

Protection Domains

The access matrix concept we have been discussing so far associates each user with a set of capabilities. A more general and flexible approach is to associate capabilities with protection domains. A protection domain is a set of objects together with access rights to those objects. In terms of the access matrix, a row defines a protection domain. So far, we have equated each row with a specific user. So, in this limited model,

each user has a protection domain, and any processes spawned by the user have access rights defined by that same protection domain. A more general concept of protection domain provides more flexibility. For example, a user can spawn processes with a subset of the user's access rights, defined as a new protection domain. This limits the capability of the process, which is useful, for example, when a server process spawns tasks on behalf of different classes of users. A user could also define a protection domain for a program that is not fully trusted, so that its access is limited to a safe subset of the user's access rights. The association between a process and a domain can be static or dynamic. For example, a process may execute a sequence of procedures, each requiring different access rights, such as read file access and write file access. In general, we would like to limit the access rights that any user or process has at any one time; using protection domains is a straightforward way of satisfying this requirement. One form of protection domain has to do with the distinction made in many operating systems, such as UNIX, between user and kernel mode. A user program executes in user mode, in which certain areas of memory are protected from the user's access and certain instructions may not be executed. When the user process calls a system routine, that routine executes in system mode, or kernel mode, in which privileged instructions may be executed and protected areas of memory may be accessed.

4.5 FILE PROTECTION

Until now, we have looked at mechanisms for protecting any object, regardless of its nature or type. Some protection schemes, however, are particular to the type of object involved. In this section, we consider file protection in order to see how such mechanisms work. The examples presented are only representative; they do not cover every form of file protection available.
Basic Forms of Protection

As previously stated, all multiuser operating systems must provide some level of protection to prevent one user from accessing or altering the files of another, whether deliberately or accidentally. The sophistication of these protection measures has grown in tandem with the number of users.

All-None Protection

By default, files in the earliest IBM operating systems were public. A file belonging to one user could be read, modified, or deleted by any other user. Instead of software- or hardware-based protection, the main safeguard was a combination of trust and ignorance. System designers assumed that users could be trusted not to read or modify others' files, because users would expect the same respect from others. Ignorance helped this situation, because a user could access a file only by name; presumably users knew the names only of those files to which they had legitimate access. However, it was recognized that some system files were sensitive, and the system administrator could protect them with a password. Although any user could exercise this feature, passwords were regarded as most valuable for protecting operating system files. Two philosophies governed the use of passwords. Sometimes passwords controlled all accesses (read, write, and delete), giving the system administrator complete control over all files. At other times passwords controlled only write and delete accesses, since only these two actions affected other users. In either case, the password mechanism required a system operator's intervention each time access to the file began. However, this all-or-none protection is unacceptable for several reasons.

• Lack of trust. The assumption that users are trustworthy is not always valid. Mutual respect might suffice in systems with a few users who all know one another, but in large systems where not every user knows every other user, there is no basis for trust.

• All or nothing. Even if a user identifies a set of trustworthy users, there is no convenient way to restrict access to them alone.

• Rise of timesharing.
This protection technique is better suited to a batch environment, in which users have little chance to interact with other users and in which users do their thinking and exploring when not connected to the system. On timesharing systems, by contrast, users interact with one another. Because users of a timesharing system choose when to execute their programs, they are more likely to arrange computing tasks so that results can be passed from one program or one user to another.

• Complexity. Operating system performance suffers because of the (human) operator intervention required for file protection. As a result, except for the most sensitive data sets, computing centers discourage this kind of file protection.

• File listings. For accounting purposes and to help users remember which files they are responsible for, various system programs can generate a list of all files. Thus, users are not necessarily ignorant of what files reside on the system, and inquisitive users can browse through any unprotected files.

Group Protection

Because the all-or-nothing approach has so many drawbacks, researchers sought a better way to protect files. They focused on identifying groups of users who shared a common bond. In a typical implementation, the world is divided into three classes: the user, a trusted working group associated with the user, and the rest of the users. For simplicity we can label these classes user, group, and world. This form of protection is used on several network systems as well as the Unix operating system. All authorized users are separated into groups. A group may consist of several members working on a common project, a department, a class, or a single user. The basis for group membership is the need to share. Because the members of a group share a common interest, it is assumed that they have files to share with one another. Under this approach, no user can belong to more than one group. (Otherwise, a member of groups A and B could pass a group-A document to a member of group B.) When creating a file, a user specifies access rights for himself or herself, for other members of the same group, and for all other users. Usually the access rights are chosen from a small set of options, such as read, write, execute, and delete. For a particular file, a user might grant read-only access to the world, read and write access to the group, and all rights to himself or herself.
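The user/group/world scheme just described reduces each access decision to two questions: which class does the requester fall into, and does that class hold the requested right? A minimal sketch (file names, users, and the permission encoding are all invented for illustration):

```python
# Unix-style user/group/world permission check (simplified sketch).
# perms maps each class to a string of granted rights:
# "r" = read, "w" = write, "d" = delete.

def may_access(file_owner, file_group, perms, user, user_group, want):
    # Classify the requester: owner first, then group member, else world.
    if user == file_owner:
        cls = "user"
    elif user_group == file_group:
        cls = "group"
    else:
        cls = "world"
    return want in perms[cls]

# Read-only for the world, read/write for the group, all rights for the owner:
perms = {"user": "rwd", "group": "rw", "world": "r"}

print(may_access("ann", "proj1", perms, "ann", "proj1", "d"))   # True: owner may delete
print(may_access("ann", "proj1", perms, "bob", "proj1", "w"))   # True: group member may write
print(may_access("ann", "proj1", perms, "bill", "other", "w"))  # False: world is read-only
```

Note that the classification is ordered: the owner's rights are checked before group membership, which is also how Unix resolves the question when the owner happens to be in the file's group.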
This arrangement would be suitable for a group paper in which different members of the group might modify sections being written within the group. The paper itself should be open for scrutiny but not modification by people outside the group. A key advantage of the group protection approach is its ease of implementation. A user is identified by two identifiers (usually numbers): a user ID and a group ID. When a user logs in, the operating system retrieves these identifiers, and they are stored in the file directory entry for each file the user creates.

Thus, the operating system can easily check whether a request for file access comes from someone whose group ID matches the group ID of the file in question. Although this protection scheme overcomes some of the shortcomings of the all-or-nothing approach, it introduces new difficulties of its own.

• Group affiliation. A single user cannot belong to two groups at once. Suppose Tom belongs to one group with Ann and to a second group with Bill. When Tom declares that a file is accessible "by the group", which group does he mean? Suppose the group with Ann has access to the file; does Bill also have access to it? The simplest way to avoid these ambiguities is to require that each user belong to exactly one group. (This does not imply that all users belong to the same group.)

• Multiple personalities. To get around the one-person, one-group restriction, some people create multiple accounts, effectively becoming multiple users. Because a single person can act as only one user at a time, this workaround leads to other problems. To see how complications arise, suppose Tom obtains two accounts, creating Tom1 in a group with Ann and Tom2 in a group with Bill. Because Tom1 and Tom2 are not in the same group, any files, programs, or aids created under the Tom1 account can be made available to Tom2 only by making them public to the entire world. Multiple personalities thus produce a proliferation of accounts, redundant files, inadequate protection for files of general interest, and inconvenience for users.

• All groups. To avoid the multiple-personality problem, the system administrator may decide that Tom should have access to all his files at all times. This approach puts the burden on Tom to control with whom he shares what information. For example, he may be in Group1 with Ann and Group2 with Bill. He creates a Group1 file to share with Ann.
But if he is active in Group2 the next time he logs in, he still sees the Group1 file and may not realize that it is not accessible to Bill, too.

• Limited sharing. Files can be shared only within groups or with the world. Users want to be able to identify sharing partners for each file individually, for example, sharing one file with ten people and another file with twenty others.

Despite their flaws, the file protection schemes we have described are relatively simple and straightforward. The ease with which they can be implemented offsets some of their limitations, and other simple-to-manage approaches exist for

providing finer levels of protection while associating permission with a single file.

Password or Other Token

We can apply a simple form of password protection to files by allowing a user to assign a password to a file. A user is allowed access only on supplying the correct password when the file is opened. The password can be required for any access or only for modifications (write access). Password access creates for a user the effect of having a different "group" for every file. However, file passwords suffer from difficulties similar to those of authentication passwords:

• Loss. Depending on how the passwords are implemented, it is possible that no one will be able to replace a lost or forgotten password. Operators or system administrators can certainly unprotect a file or assign a particular password, but often they cannot determine what password a user has chosen; if the user forgets the password, a new one must be assigned.

• Use. Supplying a password for each access to a file can be inconvenient and time consuming.

• Disclosure. If the password is disclosed to an unauthorized individual, the file becomes immediately accessible. If the user then changes the password to reprotect the file, all other legitimate users must be informed of the new password, because their old password will no longer work.

• Revocation. To revoke one user's access to a file, someone must change the password, thereby causing the same problems as disclosure.

Temporary Acquired Permission

The Unix operating system provides an interesting permission scheme based on its three-level user-group-world hierarchy. The Unix designers added a permission called set user id (suid). If this protection is set for a file to be executed, the protection level is that of the file's owner, not the executor. To see how it works, suppose Tom owns a file and allows Ann to execute it with suid.
When Ann executes the file, she has the protection rights of Tom, not of herself. This peculiar-sounding permission has a useful application. It permits a user to establish data files to which access is allowed only through specified procedures.

Let's say you want to start a computerized dating service that uses a database of people who are available on specific nights. Sue may be eager for a Saturday date, but she may have already declined a request from Jeff due to prior commitments. Sue tells you, the operator, that she does not want Jeff to know she is available. To learn who is available, or to post their own availability, Sue, Jeff, and everyone else who uses the service must be able to read and write to the file (at least indirectly). However, if Jeff can read the file directly, he will discover that Sue has lied. As a result, your dating service must require that Sue, Jeff, and everyone else access this file only through a program that screens the information Jeff obtains. On the other hand, if read and write access to the file is restricted to you, the file's owner, then Sue and Jeff will never be able to enter data into it. Unix SUID protection is the solution. You create the database file, giving only yourself access to it. You also write the program that accesses the database, and you save it with SUID protection. Then, when Jeff runs your program, he temporarily acquires your access permission, but only for the duration of the program's execution. Because your program does the actual file access, Jeff never has direct access to the file. When Jeff exits your program, he regains his own access rights and loses yours. Thus, your program can access the file, but the program displays to Jeff only the data he is allowed to see. This mechanism is convenient for system functions that general users should be able to perform only in a prescribed way. For example, only the system should be able to modify the password file, but individual users should be able to change their own passwords at any time. With the SUID feature, a password-changing program can be owned by the system, giving the program complete access to the system password table.
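The dating-service mediation described above can be sketched as follows. This is a toy model of the idea, not real SUID (which is enforced by the operating system, not by application code): the file is readable only under the owner's identity, and other users reach it only through a function that temporarily acts as the owner and filters what it reveals. All names and data here are invented for illustration.

```python
# Toy sketch of SUID-style mediated access: direct reads require the owner's
# identity; everyone else goes through a screening program.

DB_OWNER = "you"
dating_db = {"Sue": {"saturday": True, "declined": ["Jeff"]}}

def read_db(effective_user):
    # Direct access is restricted to the file's owner.
    if effective_user != DB_OWNER:
        raise PermissionError("only the owner may read the file")
    return dating_db

def query_availability(real_user, person, night):
    # While this function runs, accesses proceed under the owner's identity
    # (as with SUID), but the caller sees only a filtered answer.
    db = read_db(effective_user=DB_OWNER)
    entry = db.get(person, {})
    if real_user in entry.get("declined", []):
        return False  # Jeff is simply told "unavailable", not that Sue declined him
    return entry.get(night, False)

print(query_availability("Jeff", "Sue", "saturday"))  # False
print(query_availability("Tom", "Sue", "saturday"))   # True
```

The essential point is that `query_availability` is the only path to the data, so the screening policy cannot be bypassed, which is exactly what the SUID bit arranges for a real program and file.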
The password-changing program also carries SUID protection, so that when an ordinary user runs it, the program can modify the password file on the user's behalf in a tightly controlled way.

4.6 USER AUTHENTICATION

Much of an operating system's security rests on determining who the system's users are. People frequently ask strangers for identification in everyday situations: a bank clerk may request a driver's license before cashing a check, library staff may require identification before checking out books, and immigration officials demand passports as proof of identity. In-person identification is usually easier than remote identification.

Some colleges, for example, do not report grades over the phone because the office staff may not know the students calling. A professor who recognizes a student's voice, on the other hand, can release that student's grades. Over time, institutions have developed trustworthy means of identification: documents, voice recognition, fingerprint and retina identification, and others. In computing, the choices are more limited and the possibilities less secure. Anyone can attempt to log in to a computing system. Unlike a professor who can recognize a student's voice, a computer cannot distinguish the electrical signals from one person from those of another. Thus, most computing authentication systems must be based on some knowledge shared only by the computing system and the user. Authentication systems use one of three qualities to confirm a user's identity.

1. Something the user knows. Passwords, PINs, passphrases, a secret handshake, and a mother's maiden name are examples of what a user may know.

2. Something the user has. Identity badges, physical keys, a driver's license, and a uniform are common examples of things people have that make them recognizable.

3. Something the user is. These authenticators, called biometrics, are based on a physical characteristic of the user, such as a fingerprint, the pattern of a person's voice, or a face (picture).

These authentication means are not new (we recognize friends in person by their faces or on the phone by their voices), but they are only now being used in computer authentication.

Passwords as Authenticators

The most common authentication mechanism between a user and an operating system is a password, a "word" known to both the computer and the user. Although password protection seems to offer a relatively secure system, human practice can sometimes degrade its quality.
In this section we consider passwords, criteria for choosing them, and ways of using them for authentication. We conclude by noting additional authentication techniques and by studying problems in the authentication process, notably Trojan horses masquerading as the computer authentication process.

Use of Passwords

Passwords are mutually agreed-upon code words, assumed to be known only to the user and the system. In some cases a user chooses passwords; in other cases the system assigns them. The length and format of the password also vary from one system to another. Even though they are widely used, passwords suffer from some difficulties of use:

• Loss. Depending on how the passwords are implemented, it is possible that no one will be able to replace a lost or forgotten password. The operators or system administrators can certainly intervene and unprotect or assign a particular password, but often they cannot determine what password a user has chosen; if the user loses the password, a new one must be assigned.

• Use. Supplying a password for each access to a file can be inconvenient and time consuming.

• Disclosure. If a password is disclosed to an unauthorized individual, the file becomes immediately accessible. If the user then changes the password to reprotect the file, all other legitimate users must be informed of the new password because their old password will fail.

• Revocation. To revoke one user's access right to a file, someone must change the password, thereby causing the same problems as disclosure.

The use of passwords is straightforward. A user enters some piece of identification, such as a name or an assigned user ID; this identification can be available to the public or easy to guess because it does not provide the real security of the system. The system then requests a password from the user. If the password matches the one on file for the user, the user is authenticated and allowed access to the system. If the password match fails, the system requests the password again, in case the user mistyped.

User Authentication

Authentication is the process of establishing an individual's identity to the required level of confidence. Any cryptographic solution begins with authentication. We say this because encrypting what is being transmitted is pointless unless we know who is transmitting. The goal of encryption, as we have seen, is to safeguard communication between two or more parties. Encrypting the information travelling between the parties is pointless unless we are certain that the parties are who they claim to be.
Otherwise, there is a risk that an unauthorized person will gain access to the data. In cryptographic terms, we might put it this way: encryption is useless without authentication.

Remote User-Authentication Principles

In most computer security contexts, user authentication is the fundamental building block and the first line of defense. User authentication is the basis for most types of access control and for user accountability. The Internet Security Glossary defines user authentication as "the process of verifying an identity claimed by or for a system entity". This process consists of two steps:

• Identification step: presenting an identifier to the security system. (Identifiers should be assigned carefully, because authenticated identities are the basis for other security services, such as access control.)

• Verification step: presenting or generating authentication information that corroborates the binding between the entity and the identifier.

For example, user Alice Toklas could be assigned the user identifier ABTOKLAS. This information needs to be stored on any server or computer system that Alice wishes to use, and it may be known to system administrators and other users. A typical item of authentication information associated with this user ID is a password, which is kept secret (known only to Alice and to the system). If no one is able to obtain or guess Alice's password, then administrators can use the combination of Alice's user ID and password to configure her access permissions and audit her activity. Because Alice's ID is not secret, system users can send her email, but because her password is secret, no one can pretend to be Alice. In essence, identification is the means by which a user provides a claimed identity to the system; user authentication is the means of establishing the validity of the claim. Note that user authentication is distinct from message authentication. Message authentication, as defined earlier, is a procedure that allows communicating parties to verify that the contents of a received message have not been altered and that the source is authentic. Here we are concerned solely with user authentication.
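The identification and verification steps can be sketched concretely using the ABTOKLAS example. In this sketch the system stores the public identifier alongside a salted one-way digest of the secret password, so even an administrator who can read the store cannot recover the password itself. The iteration count, salt size, and password value are illustrative choices, not prescriptions.

```python
# Sketch of the two-step identification/verification process:
# the identifier is public; only a salted digest of the secret is stored.
import hashlib
import hmac
import os

accounts = {}

def enroll(user_id, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    accounts[user_id] = (salt, digest)  # digest is one-way: password not recoverable

def authenticate(user_id, password):
    if user_id not in accounts:
        return False                    # identification step fails
    salt, digest = accounts[user_id]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # verification, constant-time compare

enroll("ABTOKLAS", "melanctha1909")
print(authenticate("ABTOKLAS", "melanctha1909"))  # True
print(authenticate("ABTOKLAS", "wrong-guess"))    # False
```

Using `hmac.compare_digest` rather than `==` avoids leaking information through comparison timing, a small but standard precaution in verification code.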
The NIST Model for Electronic User Authentication

NIST SP 800-63-2 (Electronic Authentication Guideline, August 2013) defines electronic user authentication as the process of establishing confidence in user identities that are presented electronically to an information system. Systems can use the authenticated identity to determine whether the authenticated individual is authorized to perform particular functions, such as database transactions or access

to system resources. In many cases, the authentication and the transaction or other authorized function take place across an open network such as the Internet. Equally, authentication and subsequent authorization can take place locally, such as across a local area network. SP 800-63-2 defines a general model for user authentication that involves a number of entities and procedures. We discuss this model with reference to Figure 3.8. The user must first be enrolled with the system before user authentication can take place. The following is a typical enrollment sequence. An applicant applies to a registration authority (RA) to become a subscriber of a credential service provider (CSP). In this model, the RA is a trusted entity that establishes and vouches for the identity of an applicant to a CSP. The CSP then engages in an exchange with the subscriber. Depending on the details of the overall authentication system, the CSP issues some kind of electronic credential to the subscriber. The credential is a data structure that authoritatively binds an identity and additional attributes to a token possessed by a subscriber, and it can be verified when presented to the verifier in an authentication transaction. The token could be an encryption key or an encrypted password that identifies the subscriber. The token may be issued by the CSP, generated directly by the subscriber, or provided by a third party. The token and credential may be used in subsequent authentication events.

Fig. 4.18 The NIST SP 800-63-2 E-Authentication Architectural Model

Once a user is registered as a subscriber, the actual authentication process can take place between the subscriber and one or more systems that perform authentication and, subsequently,

authorization. The party to be authenticated is called a claimant, and the party verifying that identity is called a verifier. When a claimant successfully demonstrates possession and control of a token to a verifier through an authentication protocol, the verifier can verify that the claimant is the subscriber named in the corresponding credential. The verifier passes an assertion about the identity of the subscriber to the relying party (RP). That assertion includes identity information about the subscriber, such as the subscriber's name, an identifier assigned at registration, or other subscriber attributes that were verified in the registration process. The RP can use the authenticated information provided by the verifier to make access control or authorization decisions. Although an actual authentication system will differ from or be more complex than this simplified model, the model does illustrate the key roles and functions needed for a secure authentication system.

Means of Authentication

There are four general means of authenticating a user's identity, which can be used alone or in combination:

• Something the individual knows: Examples include a password, a personal identification number (PIN), or answers to a prearranged set of questions.

• Something the individual possesses: Examples include cryptographic keys, electronic keycards, smart cards, and physical keys. This type of authenticator is referred to as a token.

• Something the individual is (static biometrics): Examples include recognition by fingerprint, retina, and face.

• Something the individual does (dynamic biometrics): Examples include recognition by voice pattern, handwriting characteristics, and typing rhythm.

All of these methods, properly implemented and used, can provide secure user authentication. However, each method has problems. An adversary may be able to guess or steal a password. Similarly, an adversary may be able to forge or steal a token.
A user may forget a password or lose a token. Moreover, there is a significant administrative overhead in maintaining and securing password and token information on systems. With respect to biometric authenticators, there are a variety of problems, including dealing with

false positives and false negatives, user acceptance, cost, and convenience. For network-based user authentication, the most important methods involve cryptographic keys and something the individual knows, such as a password.

Mutual Authentication

An important application area is that of mutual authentication protocols. Such protocols enable communicating parties to satisfy themselves mutually about each other's identity and to exchange session keys. There the focus was on key distribution; we return to the topic here to consider the wider implications of authentication. Central to the problem of authenticated key exchange are two issues: confidentiality and timeliness. To prevent masquerade and to prevent compromise of session keys, essential identification and session-key information must be communicated in encrypted form. This requires the prior existence of secret or public keys that can be used for this purpose. The second issue, timeliness, is important because of the threat of message replays. Such replays, at worst, could allow an opponent to compromise a session key or successfully impersonate another party. At minimum, a successful replay can disrupt operations by presenting parties with messages that appear genuine but are not. The following are examples of replay attacks:

1. The simplest replay attack is one in which the opponent simply copies a message and replays it later.

2. An opponent can replay a timestamped message within the valid time window. If both the original and the replay arrive within the window, the incident can be logged.

3. As in example (2), an opponent can replay a timestamped message within the valid time window, but in addition the opponent suppresses the original message. Thus, the repetition cannot be detected.

4. Another attack involves a backward replay without modification. This is a replay back to the sender of the message.
This attack is possible if symmetric encryption is used and the sender cannot easily recognize the difference between messages sent and messages received on the basis of content. One approach to coping with replay attacks is to attach a sequence number to each message used in an authentication exchange. A new message is accepted only if its sequence number is in the proper order. The difficulty with this approach is that it requires each party to

keep track of the last sequence number for each claimant it has dealt with. Because of this overhead, sequence numbers are generally not used for authentication and key exchange. Instead, one of the following two general approaches is used:

• Timestamps: Party A accepts a message as fresh only if the message contains a timestamp that, in A's judgment, is close enough to A's knowledge of the current time. This approach requires that the clocks of the various participants be synchronized.

• Challenge/response: Party A, expecting a fresh message from B, first sends B a nonce (challenge) and requires that the subsequent message (response) received from B contain the correct nonce value.

It can be argued that the timestamp approach should not be used for connection-oriented applications because of the inherent difficulties with this technique. First, some sort of protocol is needed to maintain synchronization among the various processor clocks. This protocol must be both fault tolerant, to cope with network errors, and secure, to cope with hostile attacks. Second, the opportunity for a successful attack will arise if there is a temporary loss of synchronization resulting from a fault in the clock mechanism of one of the parties. Finally, because of the variable and unpredictable nature of network delays, distributed clocks cannot be expected to maintain precise synchronization. Therefore, any timestamp-based procedure must allow a window of time wide enough to accommodate network delays yet narrow enough to minimize the opportunity for attack. On the other hand, the challenge-response approach is unsuitable for a connectionless type of application, because it requires the overhead of a handshake before any connectionless transmission, effectively negating the chief characteristic of a connectionless transaction. For such applications, reliance on some sort of secure time server and a consistent attempt by each party to keep its clocks in synchronization may be the best approach (e.g., [LAM92b]).
One-Way Authentication

One application for which encryption is growing in popularity is electronic mail (email). The very nature of electronic mail is that it does not require the sender and receiver to be online at the same time. Instead, the email message is forwarded to the receiver's electronic mailbox, where it is buffered until the receiver is available to read it. The "envelope" or header of the email message must be in the clear so that the message can be handled by a store-and-forward email protocol, such as the Simple Mail Transfer Protocol (SMTP) or X.400. However, it is often desirable that the mail-handling protocol does

not require access to the plaintext form of the message, because that would require trusting the mail-handling mechanism. Accordingly, the email message should be encrypted such that the mail-handling system is not in possession of the decryption key. A second requirement is that of authentication. Typically, the recipient wants some assurance that the message is from the alleged sender.

Remote User Authentication Using Symmetric Encryption

Mutual Authentication

As was discussed previously, a two-level hierarchy of symmetric encryption keys can be used to provide confidentiality for communication in a distributed environment. In general, this strategy involves the use of a trusted key distribution center (KDC). Each party in the network shares a secret key, known as a master key, with the KDC. The KDC is responsible for generating session keys, which are used for a short time over a connection between two parties, and for distributing those keys using the master keys to protect the distribution. This approach is quite common, and the secret-key distribution scenario using a KDC includes authentication features. The protocol, proposed by Needham and Schroeder [NEED78], can be summarized as follows, where secret keys Ka and Kb are shared between A and the KDC and between B and the KDC, respectively:

1. A → KDC: IDA || IDB || N1
2. KDC → A: E(Ka, [Ks || IDB || N1 || E(Kb, [Ks || IDA])])
3. A → B: E(Kb, [Ks || IDA])
4. B → A: E(Ks, N2)
5. A → B: E(Ks, f(N2))

The purpose of the protocol is to distribute securely a session key Ks to A and B. A securely acquires a new session key in step 2. The message in step 3 can be decrypted, and hence understood, only by B. Step 4 reflects B's knowledge of Ks, and step 5 assures B of A's knowledge of Ks; the nonce N2 assures B that this is a fresh message. The purpose of steps 4 and 5 is to prevent a certain type of replay attack. In particular, if an opponent were able to capture the message in step 3 and replay it, this might in some fashion disrupt operations at B. Despite the handshake of steps 4

and 5, the protocol is still vulnerable to a form of replay attack. Suppose that an opponent, X, has been able to compromise an old session key. Admittedly, this is a much more unlikely occurrence than that an opponent has simply observed and recorded step 3; nevertheless, it is a potential security risk. X can impersonate A and trick B into using the old key by simply replaying step 3. Unless B remembers indefinitely all previous session keys used with A, B will be unable to determine that this is a replay. If X can intercept the handshake message in step 4, then it can impersonate A's response in step 5. From this point on, X can send bogus messages to B that appear to B to come from A, using an authenticated session key. Denning proposes to overcome this weakness by a modification to the Needham/Schroeder protocol that includes the addition of a timestamp to steps 2 and 3. Her proposal, which assumes that the master keys Ka and Kb are secure, consists of the following steps. (The portion to the left of the colon indicates the sender and the receiver; the portion to the right indicates the contents of the message; the symbol || indicates concatenation.)

1. A → KDC: IDA || IDB
2. KDC → A: E(Ka, [Ks || IDB || T || E(Kb, [Ks || IDA || T])])
3. A → B: E(Kb, [Ks || IDA || T])
4. B → A: E(Ks, N1)
5. A → B: E(Ks, f(N1))

T is a timestamp that assures A and B that the session key has only just been generated. Thus, both A and B know that the key distribution is a fresh exchange. A and B can verify timeliness by checking that

|Clock − T| < ∆t1 + ∆t2

where ∆t1 is the estimated normal discrepancy between the KDC's clock and the local clock (at A or B) and ∆t2 is the expected network delay time. Each node can set its clock against some standard reference source. Because the timestamp T is encrypted using the secure master keys, an opponent, even with knowledge of an old session key, cannot succeed, because a replay of step 3 will be detected by B as untimely. A final point: steps 4 and 5 were not included in the original presentation but were added later. These steps confirm the receipt of the session key at B. The Denning protocol seems to provide

an increased degree of security compared to the Needham/Schroeder protocol. However, a new concern is raised: this scheme requires reliance on clocks that are synchronized throughout the network. The risk arises from the fact that the distributed clocks can become unsynchronized as a result of sabotage of, or faults in, the clocks or the synchronization mechanism. The problem occurs when a sender's clock is ahead of the intended recipient's clock. In this case, an opponent can intercept a message from the sender and replay it later, when the timestamp in the message becomes current at the recipient's site. This replay could cause unexpected results. Gong refers to such attacks as suppress-replay attacks.

One way to counter suppress-replay attacks is to enforce the requirement that parties regularly check their clocks against the KDC's clock. The other alternative, which avoids the need for clock synchronization, is to rely on handshaking protocols using nonces. This latter alternative is not vulnerable to a suppress-replay attack, because the nonces the recipient will choose in the future are unpredictable to the sender. The Needham/Schroeder protocol relies on nonces only but, as we have seen, has other vulnerabilities.

A later protocol attempts to respond to the concerns about suppress-replay attacks and at the same time to fix the problems in the Needham/Schroeder protocol. Subsequently, an inconsistency in this latter protocol was noted and an improved strategy was presented. The protocol is:

1. A → B: IDA || Na
2. B → KDC: IDB || Nb || E(Kb, [IDA || Na || Tb])
3. KDC → A: E(Kb, [IDA || Ks || Tb]) || E(Ka, [IDB || Na || Ks || Tb]) || Nb
4. A → B: E(Kb, [IDA || Ks || Tb]) || E(Ks, Nb)

Let us follow this exchange step by step.

1. A initiates the authentication exchange by generating a nonce, Na, and sending it to B in plaintext along with its identifier. This nonce will later be returned to A in an encrypted message that includes the session key, assuring A of its timeliness.

2. B alerts the KDC that a session key is needed. Its message to the KDC includes its identifier and a nonce, Nb.
This nonce will later be returned to B in an encrypted message that includes the session key, assuring B of its timeliness. B's message to the KDC also includes a block encrypted with the secret key shared by B and the KDC.

This block instructs the KDC to issue credentials to A; it specifies the intended recipient of the credentials, a suggested expiration time for the credentials, and the nonce received from A.

3. The KDC passes on to A B's nonce and a block encrypted with the secret key that B shares with the KDC. The block serves as a "ticket" that A can use for subsequent authentications, as will be seen. The KDC also sends to A a block encrypted with the secret key shared by A and the KDC. This block verifies that B has received A's initial message (IDB) and that this is a timely message and not a replay (Na), and it provides A with a session key (Ks) and the time limit on its use (Tb).

4. A transmits the ticket to B, together with B's nonce, the latter encrypted with the session key. The ticket provides B with the secret key that is used to decrypt E(Ks, Nb) and recover the nonce. The fact that B's nonce is encrypted with the session key authenticates that the message came from A and is not a replay.

This protocol provides an effective, secure means for A and B to establish a session with a secure session key. Furthermore, the protocol leaves A in possession of a key that can be used for subsequent authentication to B, avoiding the need to contact the authentication server repeatedly. Suppose that A and B establish a session using the aforementioned protocol and then conclude that session. Subsequently, but within the time limit established by the protocol, A desires a new session with B. The following protocol ensues:

1. A → B: E(Kb, [IDA || Ks || Tb]) || N′a
2. B → A: N′b || E(Ks, N′a)
3. A → B: E(Ks, N′b)

When B receives the message in step 1, it verifies that the ticket has not expired. The newly generated nonces N′a and N′b assure each party that there is no replay attack. In all the foregoing, the time specified in Tb is a time relative to B's clock. Thus, this timestamp does not require synchronized clocks, because B checks only self-generated timestamps.

The decentralized key distribution scenario was shown to be impracticable when using symmetric encryption.
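The key property just described — that B validates the ticket lifetime Tb only against its own clock, so no clock synchronization is needed — can be sketched as follows. This is an illustrative Python sketch only; the helper names `issue_ticket_time` and `ticket_is_valid` are assumptions made for the example, not part of any protocol standard.

```python
import time

# Sketch: B checks only self-generated timestamps. Because Tb was
# produced from B's own clock (step 2), B can later verify ticket
# freshness without any synchronization with A or the KDC.

def issue_ticket_time(lifetime_seconds):
    """B proposes an expiration time Tb relative to its own clock."""
    return time.time() + lifetime_seconds

def ticket_is_valid(tb, now=None):
    """B accepts a presented ticket only if Tb has not yet passed."""
    if now is None:
        now = time.time()
    return now <= tb

tb = issue_ticket_time(3600)                 # ticket good for one hour
assert ticket_is_valid(tb)                   # fresh ticket accepted
assert not ticket_is_valid(tb, now=tb + 1)   # expired ticket rejected
```

Note that the check involves only `tb` and B's local `time.time()`; A's clock never enters the computation, which is exactly why suppress-replay concerns about skewed sender clocks do not arise here.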
The sender must first send a request to the intended recipient and then await a response containing a session key before sending the message. With some refinement, the KDC strategy described earlier is a candidate for encrypted electronic mail. Steps

4 and 5 must be deleted, because we wish to avoid requiring that the recipient (B) be online at the same time as the sender (A). For a message with content M, the sequence is as follows:

A → B: E(Kb, [Ks || IDA]) || E(Ks, M)

Using this approach, only the intended recipient of a message will be able to read it. It also provides a level of assurance that the sender is A. As specified, however, the protocol does not protect against replays. Some measure of defense could be provided by including a timestamp with the message, but because of the potential delays in the e-mail process, such timestamps may be of limited usefulness.

4.7 SECURITY POLICIES

The first step in devising security services and mechanisms is to develop a security policy. The term "security policy" is used in a variety of ways by those involved in computer security. At the very least, a security policy is an informal description of desired system behavior [NRC91]. Such informal policies may reference requirements for safety, integrity, and availability. More formally, a security policy is a statement of the rules and practices that specify or regulate how a system or organization provides security services to protect sensitive and critical system resources (RFC 2828). Such a formal security policy can be implemented by the system's technical controls as well as by its management and operational controls.

In developing a security policy, a security manager needs to consider the following factors:

• The value of the assets being protected
• The vulnerabilities of the system
• Potential threats and the likelihood of attacks

Further, the manager must consider the following trade-offs:

• Ease of use versus security: Virtually all security measures involve some penalty in the area of ease of use. The following are some examples. Access control mechanisms require users to remember passwords and perhaps perform other access control actions.

Firewalls and other network security measures may reduce available transmission capacity or slow response time. Virus-checking software reduces available processing power and introduces the possibility of system crashes or malfunctions due to improper interaction between the security software and the operating system.

• Cost of security versus cost of failure and recovery: In addition to ease-of-use and performance costs, there are direct monetary costs in implementing and maintaining security measures. All of these costs must be balanced against the cost of security failure and recovery if certain security measures are lacking. The cost of security failure and recovery must take into account not only the value of the assets being protected and the damage resulting from a security violation, but also the risk, which is the probability that a particular threat will exploit a particular vulnerability with a particular harmful result.

Security Implementation

Security implementation involves four complementary courses of action:

• Prevention: An ideal security scheme is one in which no attack is successful. Although this is not practical in all cases, there is a wide range of threats in which prevention is a reasonable goal. For example, consider the transmission of encrypted data. If a secure encryption algorithm is used, and if measures are in place to prevent unauthorized access to encryption keys, then attacks on confidentiality of the transmitted data will be prevented.

• Detection: In a number of cases, absolute protection is not possible, but it is practical to detect security attacks. For example, there are intrusion detection systems designed to detect the presence of unauthorized individuals logged onto a system. Another example is detection of a denial-of-service attack, in which communications or processing resources are consumed so that they are unavailable to legitimate users.
• Response: If security mechanisms detect an ongoing attack, such as a denial-of-service attack, the system may be able to respond in such a way as to halt the attack and prevent further damage.

• Recovery: An example of recovery is the use of backup systems, so that if data integrity is compromised, a prior, correct copy of the data can be reloaded.
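The security-cost trade-off discussed above can be made concrete with a simple expected-loss calculation. The sketch below is illustrative only: the dollar figures, probabilities, and the `annualized_loss` helper are assumptions made for the example, not values from any standard.

```python
# Sketch: weigh the cost of a safeguard against the expected loss it
# prevents. Expected (annualized) loss = value of the asset at risk
# multiplied by the probability per year of a successful attack.

def annualized_loss(asset_value, attack_probability_per_year):
    return asset_value * attack_probability_per_year

asset_value = 500_000      # value of the protected asset (assumed)
p_without = 0.10           # yearly breach probability with no safeguard
p_with = 0.01              # yearly breach probability with the safeguard
safeguard_cost = 20_000    # yearly cost of the safeguard

savings = (annualized_loss(asset_value, p_without)
           - annualized_loss(asset_value, p_with))

# The safeguard is economically justified when the expected savings
# exceed its cost: here 50_000 - 5_000 = 45_000 > 20_000.
print(savings > safeguard_cost)  # prints True
```

The same arithmetic run with a cheaper asset or a rarer threat can flip the verdict, which is exactly the manager's trade-off described above: security spending only makes sense relative to the value of the assets, the vulnerabilities, and the likelihood of attack.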

Assurance and Evaluation

Those who are "consumers" of computer security services and mechanisms (for example, system managers, vendors, customers, and end users) want to believe that the security measures in place work as intended. That is, security consumers want to feel that the security infrastructure of their systems meets security requirements and enforces security policies. These considerations bring us to the concepts of assurance and evaluation.

The NIST Computer Security Handbook [NIST95] defines assurance as the degree of confidence one has that the security measures, both technical and operational, work as intended to protect the system and the information it processes. This encompasses both system design and system implementation. Thus, assurance deals with the questions, "Does the security system design meet its requirements?" and "Does the security system implementation meet its specifications?"

Note that assurance is expressed as a degree of confidence, not as a formal proof that a design or implementation is correct. With the current state of the art, it is very difficult, if not impossible, to move beyond a degree of confidence to absolute proof. Much work has been done in developing formal models that define requirements and characterize designs and implementations, together with logical and mathematical techniques for addressing these issues. But assurance remains a matter of degree.

Evaluation is the process of examining a computer product or system with respect to certain criteria. Evaluation involves testing and may also involve formal analytic or mathematical techniques. The central thrust of work in this area is the development of evaluation criteria that can be applied to any security system (encompassing security services and mechanisms) and that are broadly supported for making product comparisons.
4.8 MODELS OF SECURITY

We start with the concept of a system resource, or asset, that users and owners wish to protect. The assets of a computer system can be categorized as follows:

• Hardware: Including computer systems and other data processing, data storage, and data communications devices

• Software: Including the operating system, system utilities, and applications

• Data: Including files and databases, as well as security-related data, such as password files

• Communications facilities and networks: Local and wide area network communication links, bridges, routers, and so on

A network security model demonstrates how security services are deployed across an organization to prevent an opponent from compromising the confidentiality or authenticity of the data transferred over the network. In this section we examine the general network security model, which shows how messages are shared securely between sender and recipient over a network. We also discuss the network access security model, which is designed to protect a system from unauthorized network access.

For a message to be sent or received, there must be a sender and a receiver, and both must agree to exchange the message. A medium, such as the Internet, is then required to carry the communication from sender to receiver. A logical information channel is established over the network from sender to recipient, and the two parties establish a connection using communication protocols. We are concerned with protecting a message sent over this network when the message contains private or authentic information that is vulnerable to an adversary located on the information channel. Any security service will involve the three ingredients outlined below:

1. A security-related transformation of the data before it is sent, so that any opponent present on the information channel cannot read the message. This corresponds to encryption of the message. It may also include the addition of a code, computed from the data during the transformation, that is used to verify the identity of the genuine sender.

2.
Some secret information shared by the sender and the recipient that, it is hoped, is unknown to the opponent. This is the encryption key, which is used at the sender's end and at the receiver's end for encrypting and decrypting the message.

3. A trusted third party that is responsible for distributing the secret information to the two principals while keeping it from any opponent.
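The three ingredients above can be sketched in miniature. This is an illustrative Python sketch only: the XOR "cipher" built from a SHA-256 keystream is a stand-in for a real encryption algorithm such as AES, and the constant `SHARED_KEY` plays the role of the secret key distributed by the trusted third party.

```python
import hashlib
import hmac

# Ingredients 2 and 3: a secret key, assumed to have been distributed
# to both principals by a trusted third party and unknown to opponents.
SHARED_KEY = b"session-key-from-trusted-third-party"

def keystream(key, length):
    # Toy keystream derived from the key (illustration only; a real
    # design would use a vetted cipher such as AES).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(message):
    # Ingredient 1: transform the message (encrypt) and append a code
    # (an HMAC tag) used to validate its origin and integrity.
    cipher = bytes(m ^ k for m, k in zip(message, keystream(SHARED_KEY, len(message))))
    tag = hmac.new(SHARED_KEY, cipher, hashlib.sha256).digest()
    return cipher + tag

def unprotect(blob):
    # The recipient checks the validation code, then reverses the
    # transformation; an opponent without SHARED_KEY can do neither.
    cipher, tag = blob[:-32], blob[-32:]
    expected = hmac.new(SHARED_KEY, cipher, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed validation")
    return bytes(c ^ k for c, k in zip(cipher, keystream(SHARED_KEY, len(cipher))))

assert unprotect(protect(b"confidential report")) == b"confidential report"
```

A tampered ciphertext fails the HMAC check and is rejected, illustrating how the appended code authenticates the message in addition to the confidentiality provided by the encryption step.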

