A ratio scale accommodates the defining characteristics of the three other measurement scales: the labelling of variables, the ordering of variables, and a calculable (usually equidistant) difference between variables. Because a ratio scale has a true zero value, it has no negative values. Before deciding to employ a ratio scale, the researcher must assess whether the variable has all the properties of an interval scale plus an absolute zero value. The ratio scale can be used to calculate the mean, mode, and median.

Ratio Scale Examples
Questions such as the following fall under the ratio scale category:

What is your daughter's current height?
Less than 5 feet
5 feet 1 inch to 5 feet 5 inches
5 feet 6 inches to 6 feet
More than 6 feet

All of this can be summarized in the table below:

Table No 10.1

151 CU IDOL SELF LEARNING MATERIAL (SLM)

10.3 SUMMARY

Measurement is the foundation of any research. The most important distinction between qualitative and quantitative research studies lies in the forms of measurement employed to obtain information from respondents, in addition to the ideologies and philosophical foundations of each mode of inquiry. In contrast to quantitative research, which often seeks answers on one of the measurement scales (nominal, ordinal, interval or ratio), qualitative research typically uses descriptive statements. If a piece of information is not acquired on one of these scales during data collection, it is converted into a variable measured on one of the scales during analysis.

Nominal scales contain the least amount of information. In nominal scales, the numbers assigned to each variable or observation are used only to classify the variable or observation. For example, a fund manager may choose to assign the number 1 to small-cap stocks, the number 2 to corporate bonds, the number 3 to derivatives, and so on.

Ordinal scales present more information than nominal scales and are, therefore, a higher level of measurement. In ordinal scales, there is an ordered relationship between the variable's observations. For example, a list of 500 mutual fund managers may be ranked by assigning the number 1 to the best-performing manager, the number 2 to the second-best-performing manager, and so on. With this type of measurement, one can conclude that the number 1-ranked manager performed better than the number 2-ranked manager.

Ratio scales are the most informative scales. Ratio scales provide rankings, assure equal differences between scale values, and have a true zero point. In essence, a ratio scale can be thought of as the nominal, ordinal, and interval scales combined.

10.4 KEYWORDS

Hypothesis: A hypothesis is a hunch, assumption, suspicion, assertion or idea about a phenomenon, relationship or situation whose reality or truth you do not know, and you set up your study to find that truth. A researcher refers to these assumptions, assertions, statements or hunches as hypotheses, and they become the basis of an enquiry.
In most studies the hypothesis will be based either upon previous studies or on your own or someone else's observations.

Interval scale: The interval scale is one of the measurement scales used in the social sciences, in which the scale is divided into a number of intervals or units. An interval scale has all the characteristics of an ordinal scale. In addition, it has a unit of measurement that enables individuals or responses to be placed at equally spaced intervals in relation to the spread of the scale. This scale has a starting and a terminating point and is divided into equally spaced units/intervals. The starting and terminating points and the number of units/intervals between them are arbitrary and vary from scale to scale, as the scale does not have a fixed zero point.

Nominal scale: The nominal scale is one of the ways of measuring a variable in the social sciences. It enables the classification of individuals, objects or responses based on a common/shared property or characteristic. These people, objects or responses are divided into a number of subgroups in such a way that each member of the subgroup shares the common characteristic.

Ordinal scale: An ordinal scale has all the properties of a nominal scale plus one of its own. Besides categorising individuals, objects, responses or a property into subgroups on the basis of a common characteristic, it ranks the subgroups in a certain order. They are arranged in either ascending or descending order according to the extent to which a subcategory reflects the magnitude of variation in the variable.

Ratio scale: A ratio scale has all the properties of the nominal, ordinal and interval scales plus one of its own: the zero point of a ratio scale is fixed, which means it has a fixed starting point. Therefore, it is an absolute scale. As the difference between the intervals is always measured from a zero point, arithmetical operations can be performed on the scores.

10.5 LEARNING ACTIVITY

1. What is a nominal scale?
___________________________________________________________________________
___________________________________________________________________________
2. State the meaning of an interval scale.
___________________________________________________________________________
___________________________________________________________________________
3. List the various levels of measurement.
___________________________________________________________________________
___________________________________________________________________________
4. What is a ratio scale?
___________________________________________________________________________
___________________________________________________________________________

10.6 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions:
1. What is a nominal scale?
2. Explain with an illustration the term interval scale.
3. Explain with an illustration the term ranking scale.
4. Write a note on the ratio scale.

Long Questions:
1. Explain all the levels of measurement in detail.
2. What is the difference between an interval scale and a ratio scale?
3. Write a note on the ordinal scale with a detailed illustration.

B. Multiple Choice Questions

1. An ______ scale is a numerical scale in which both the order of and the difference between the variables are known.
a. interval
b. ranking
c. ratio
d. ordinal

2. A _______ scale is a type of variable measurement scale that, in addition to producing the order of the variables, also makes the difference between the variables and the value of true zero known.
a. ratio
b. interval
c. ordinal
d. classificatory

3. The _______ scale lacks negative values since there is an actual zero value.
a. ratio
b. interval
c. ordinal
d. nominal

4. The ________ scale is often used in research surveys and questionnaires where only variable labels hold significance.
a. Nominal
b. Ranking
c. Ratio
d. Interval

Answers: 1-a, 2-a, 3-a, 4-a

10.7 REFERENCES

Reference books
Shukla, Satishprakash (2020). Research Methodology and Statistics. Ahmedabad: Rishit Publications.
Shukla, Satishprakash (2014). Research – An Introduction (Gujarati). Ahmedabad: Kshiti Prakashan.
Kothari, C.R. (2011). Research Methodology. New Age International.
Shajahan, S. (2004). Research Methods for Management.
Thanulingom, N. Research Methodology. Himalaya Publishing.
Rajendar Kumar, C. Research Methodology. APH Publishing.
Kumar, Ranjit (2014). Research Methodology: A Step by Step Guide for Beginners. Sage Publications.

Websites
https://www.questionpro.com/blog/nominal-ordinal-interval-ratio
https://conjointly.com/kb/levels-of-measurement/
https://careerfoundry.com/en/blog/data-analytics/data-levels-of-measurement/
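The distinctions among the four levels of measurement covered in this unit can be sketched in code. The snippet below is only an illustration: the data values are invented, and it uses Python's standard `statistics` module to show which summary statistics are meaningful at each level.

```python
# Sketch: permissible summary statistics at different levels of measurement.
# The example data are hypothetical, chosen only to illustrate the idea.
from statistics import mean, median, mode

# Nominal: labels only -- 1 = small-cap stocks, 2 = corporate bonds,
# 3 = derivatives. Counting categories (the mode) is meaningful;
# averaging the codes is not.
asset_classes = [1, 2, 2, 3, 1, 2]
print("Nominal mode:", mode(asset_classes))  # most frequent category

# Ordinal: ranks of mutual fund managers (1 = best). Order matters,
# but the gaps between ranks need not be equal, so the median is the
# highest meaningful "average".
ranks = [1, 2, 3, 4, 5]
print("Ordinal median:", median(ranks))

# Ratio: heights in inches have a true zero, so mean, median and mode
# are all meaningful, and ratios ("twice as tall") make sense.
heights = [58, 62, 65, 65, 70]
print("Ratio mean:", mean(heights))
print("Ratio median:", median(heights))
print("Ratio mode:", mode(heights))
```

The point of the sketch is that each higher level of measurement permits every statistic allowed at the levels below it, plus new ones of its own.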
UNIT - 11 MEASUREMENT SCALE-I

STRUCTURE
11.0 Learning Objectives
11.1 Introduction
11.2 Characteristics of a Measurement Scale
11.3 Scale and Questionnaire Development
11.4 Summary
11.5 Keywords
11.6 Learning Activity
11.7 Unit End Questions
11.8 References

11.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Describe the nature of a measurement scale
Identify the scale development process
State the process of questionnaire development

11.1 INTRODUCTION

The various ways that variables are specified and categorized into distinct groups are known as scales of measurement in research and statistics. The scale of measurement refers to the kind of values assigned to the variables in a data set and is occasionally referred to as the level of measurement. The term "scale of measurement" is derived from two keywords in statistics: measurement and scale. Measurement is the process of recording the observations gathered for the study. Scaling is the assignment of objects to numbers or semantics. Together, these two words denote the relationship between the assigned objects and the observed data. In statistics, a measurement scale is used to describe or quantify data variables, and it determines the kind of statistical analysis techniques to be employed.

The type of measurement scale to be used for statistical measurement depends on the type of data being collected. There are four types of measurement scales: the nominal scale, ordinal scale, interval scale, and ratio scale. These scales are used to measure both qualitative and quantitative data: nominal and ordinal scales are used to measure qualitative data, whereas interval and ratio scales are used to measure quantitative data.

11.2 CHARACTERISTICS OF A MEASUREMENT SCALE

1. Identity
Identity is the assignment of numerical values to each of a variable's values in a set of data. Consider a questionnaire that offers the alternatives Male and Female for the respondent's gender. Males and females can be represented by the numbers 1 and 2, respectively. These values can be used only for identification, hence arithmetic operations cannot be applied to them. This is an attribute of a nominal scale.

2. Magnitude
Magnitude refers to the size of the values on a measurement scale: the numbers have a natural order from least to greatest. Typically, they are shown on the scale in either ascending or descending order. For instance, the order of finishers in a race runs from first and second down to last. This example is measured on an ordinal scale because it possesses both identity and magnitude.

3. Equal intervals
A scale has equal intervals when the difference between adjacent levels on the scale is the same throughout. The ordinal-scale example highlighted above does not fit this description: the intervals between the positions are not equal. In a race, the person in first place may finish in 20 seconds, the person in second place in 20.8 seconds, and the person in third place in 30 seconds. A variable that has identity, magnitude, and equal intervals is measured on an interval scale.

4. Absolute zero
Absolute zero is the feature that is unique to a ratio scale. It means that a true zero exists on the scale, defined by the absence of the variable being measured (e.g. no qualification, no money, does not identify as any gender).

11.3 SCALE AND QUESTIONNAIRE DEVELOPMENT

Scale development takes a lot of effort and research, so we have included in this toolkit a few reliable scales, created after careful investigation, that you can use. Scales have the benefit of being more accurate and trustworthy when examining the underlying theme they are intended to gauge (McIver & Carmines, 1981). For instance, you might include a scale measuring students' self-efficacy in your questionnaire. This scale, which has five items, would measure how much students believe they can achieve in accomplishing academic goals. We strongly encourage you to use the student self-efficacy scales that we have developed from existing, previously validated scales.

Gehlbach and Brinkworth synthesized several known survey design practices and created a new rigorous and reliable process for designing survey scales. This process relies on both potential survey participants and experts in the field in an effort to reduce measurement error and increase the validity of new survey scales. The six-step process is as follows:

Step 1: Literature review
The first step is to review the literature, in order to precisely define the construct in relation to the literature on the subject and to determine how current measures of the construct (or related constructs) might be helpful in the construction of a new scale. For instance, if you were creating a new scale to assess educational self-efficacy, you would review the literature to help you define self-efficacy.
Step 2: Interviews and focus groups
Potential respondents are contacted for interviews and focus groups to determine whether their conceptions of the construct align with the conceptualization drawn from the literature. From the population of target respondents, you must select a sample of participants for your interviews or focus groups (e.g., a sample of Imperial students, if the scale in question is to be administered to Imperial students). Using the example of developing a new educational self-efficacy scale once more, this stage will help you determine whether the way academics define educational self-efficacy aligns with how Imperial students conceptualize it.

Step 3: Resolving discrepancies
To resolve discrepancies between academic and lay understandings of the construct in question, synthesize the literature review with the evidence from the focus groups and interviews. Where researchers and respondents agree conceptually but characterize the indicators or sub-themes differently, you can adopt the respondents' terminology in this phase. This will help you compile a list of sub-themes or indicators around which you can develop your items in Step 4.

Step 4: Creation of preliminary items
The objective of this step is to create items that correspond to the indicators, or themes, that emerged from the analysis of the literature and the interview and focus-group data. It is wise to create a few more items in this phase than you want to include in your final scale. For instance, create a preliminary list of 8-9 items if your final scale on educational self-efficacy is to have 5 items. Please refer to the section above titled "Top Tips for Developing Items and Response Options."

Step 5: Expert validation of the preliminary items
It is advisable to go back to your academic audience and ask subject-matter experts to participate in an online survey in which they assess the items and offer suggestions for how items might be improved (or eliminated altogether). This stage will help ensure that the items you designed align with how you conceptualized the construct (such as educational self-efficacy) and may provide additional input on any indicators that are missing. When you contact subject-matter experts in the construct of interest, make sure to give them your definition of the construct, and ask them to comment on how applicable each item you created is to the construct and to identify any significant aspects of the construct that are not reflected in your items. Depending on the advice you receive from the experts, you can then modify or remove items.

Step 6: Cognitive pretesting interviews
Cognitive pretesting interviews check whether respondents consistently interpret the remaining items the way you intend. During each one-on-one interview, you will ask prospective respondents to do two tasks: 1) restate each item in their own words (avoiding using any of the words in the item itself), and 2) think aloud while they determine their response to each question. This procedure can help you identify consistent trends across numerous respondents concerning any problematic items and offer guidance on how to modify them (e.g., adapting the vocabulary of an item to make it easier to understand).
The Questionnaire Development Process

Step 1: Determine the Survey Objectives, Resources, and Time Constraints
Once the decision has been made to conduct a survey, the marketer and marketing researchers must agree on the survey objectives, that is, what information the survey is to collect. In addition to establishing the goals of the survey, a budget and timetable must be established.

Step 2: Determine How the Questionnaire Will Be Administered
Researchers can administer surveys in a variety of ways: online, through the mail, on the telephone, or by face-to-face interviews. Each method has its strengths and weaknesses:

a. Personal Interviews
In the past, the in-person interview was the most popular way to conduct surveys. In the 1970s, interviewers frequently visited communities and knocked on potential respondents' doors. Researchers now use mall-intercept interviews when they combine a face-to-face interview with a questionnaire. In mall-intercept interviews, questions are asked at shopping malls. The interviewer approaches a shopper who appears to fit the description of the ideal respondent. After the interviewer has determined that the potential respondent is suitable, the interview can be conducted right away, or the subject may be asked to complete the questionnaire at a location in the mall. Interviews at shopping centers are fairly common: according to estimates, nearly two thirds of marketing research surveys are completed at malls. Mall-intercept interviews are particularly suitable when the research requires the demonstration of stimuli or other elements, as they enable researchers to use visual stimuli and taste tests. Mall-intercept interviews have the drawbacks of being pricey, being restricted to urban regions, and possibly not being demographically representative of the population of interest. Mall-intercept interviews could overrepresent young women, suburbanites, those with middle-class salaries, and consumers. Malls may also not be the most comfortable places for people to answer questions, which may limit the capacity of mall-intercept surveys to capture respondents' sentiments.

b. Telephone Polls
Another way to administer a survey is over the phone, reaching potential respondents by telephone. A random dialing system or other computerized methods are used to choose the phone numbers, ensuring the random selection of respondents and the best chance of reaching them at home. This technique is frequently applied in public opinion polls. The high response rates of telephone interviews allow researchers to follow up with additional questions; they are, nevertheless, costly to carry out. Another drawback of telephone surveys is the inability to incorporate visual cues in the research.

c. Mail Surveys
In mail surveys, questionnaires are sent to respondents by mail. Market researchers use two types of mail surveys: ad hoc mail surveys and mail panels. There is no interviewer involved, because the respondent administers these surveys themselves. Ad hoc mail surveys involve mailing questionnaires to unrelated participants who are chosen at random; these names might come from purchased mailing lists, and marketing researchers contact these potential respondents only once. Mail panel surveys include pre-screened respondents: members of the mail panel have consented to take part in periodic surveys. Mail surveys are generally inexpensive to perform, despite the need to pay for postage to and from respondents as well as the cost of printing the questionnaires. However, mail surveys can have low response rates, and responses may take a while to reach the researchers. Non-responders are frequently not distributed evenly throughout the sample: high-income and highly educated respondents are less likely to respond to surveys sent by mail, and the results may be skewed by this unequal distribution of non-responders.

Step 3: Determine the Question Format

a. Open-Ended Questions
Open-ended questions are like the questions used with exploratory research: respondents answer the question using their own words.
Open-ended questions do not contain a set list of answers. Here is an example of an open-ended question:

Many questionnaires end with an open-ended question. Here is an example of such a question:

b. Closed-Ended Questions
Closed-ended questions predominate in questionnaires. They offer the respondent a predetermined set of responses. Closed-ended questions are quicker and simpler to program, can be easily handled by interviewers with less experience, and reduce the likelihood of interviewer bias. However, the range of responses to these questions may not be wide enough, and if they are worded incorrectly, bias may be introduced.

A. Dichotomous Questions
The simplest form of closed-ended question is the dichotomous question. Dichotomous questions ask the respondent to select from two possible answers. Here are some examples:

Table No 11.1 Dichotomous Questions

B. Multiple-Choice Questions
There are two forms of multiple-choice questions: multiple-choice and multiple-answer. With multiple-choice questions, respondents select one answer from a list of three or more options. Here are some examples of multiple-choice questions:

Which of the following age groups are you in? Check the appropriate box:

Table No 11.2 Numbers of Brushing

C. Scaled Response Questions
Scaled response questions are designed to capture the intensity of a respondent's feelings and attitudes. We covered numerous forms of scaled response questions in the Measurement module. These scales include:
Graphic Rating Scales
Itemized Rating Scales
Semantic Differential Scales
Stapel Scales
Likert Scales

Step 4: Writing Clear Questions
1. Questions must be clearly written, easily understood, and unambiguous.
2. Questions must not introduce bias.
3. Questions must not impose assumptions.
4. Questions must be within respondents' ability to answer.

Step 5: Designing the Question Flow
Researchers not only spend a lot of time writing and rewriting each question; they also must devote considerable thought to the logical flow of the questions. When considering the flow of the questions, researchers want to make certain that they:
1. Avoid responses from unqualified respondents
2. Make respondents feel comfortable so that they answer the questions honestly
3. Ask questions that provide all the information they need

Many researchers organize their questionnaires into six parts:
Part 1: Introduction
Part 2: Initial screening of respondents
Part 3: Warm-up questions
Part 4: Transition into more detailed and more difficult questions
Part 5: Demographics, psychographics, usage behavior, and questions that might cause embarrassment
Part 6: Farewell

Step 6: Questionnaire Evaluation
At this stage, the questionnaire goes through its first evaluation. Researchers focus this evaluation on three questions:

Question 1: Is this question necessary?
Once there is a rough draft of the questionnaire, researchers and their clients use their judgment to review it. An essential part of the process is to eliminate redundant questions. One researcher I worked with used to say about questionnaires that you "start fat, and work to get thin." What he meant is that you write a lot of questions and then start to eliminate many of them. Each question must serve a purpose. Questions must relate directly to the survey objectives. If a question does not have a strong link to the survey objectives, the researchers consider eliminating it.

Question 2: Is the questionnaire too long?
If a questionnaire takes too long to complete, it will not be effective. Researchers often play the role of respondent to determine how long it takes to complete the questionnaire. Questionnaires administered online should take respondents about five minutes to complete, and in no case should they take longer than seven minutes. Questionnaires administered on the telephone or using mall intercepts should take less than 20 minutes to complete. These surveys may be longer if respondents are told that they will be compensated with a valuable premium once they complete the questionnaire.

Question 3: Will the questionnaire provide all the needed information?
Researchers link each question to a survey objective to ensure that the questionnaire meets its objectives.

Step 7: Obtain Client Approval
Market researchers have a duty to their clients, and keeping clients informed is a sign of excellent client service. Obtaining approval from all stakeholders with decision-making authority is a crucial step in the questionnaire development process. If the questionnaire is rejected, it needs to be updated, or the project as a whole needs to be re-evaluated.

Step 8: Pretest the Survey and Make Revisions
By putting the questionnaire to the test on a small sample, researchers can revisit many of the issues mentioned in Step 6. During this stage of the procedure, researchers want to know whether respondents find any parts of the survey dull, unclear, or confusing. The pretest should be administered to the same demographic of respondents as the final survey, and it should use the same survey administration method.

Step 9: Prepare the Final Questionnaire Copy and Layout
At this point, the researchers examine the pretest results and, if required, alter the questionnaire in light of them. The questionnaire's final design should be created with the chosen administration method in mind. The client will be given the opportunity to approve the final questionnaire's wording and design.

Step 10: Field the Questionnaire
Once the final draft and layout have been approved, it is time to field the questionnaire, that is, to carry out the survey. To accomplish this, the researchers employ a field service company. This company distributes the questionnaires to the respondents, gathers the answers, checks them, and then gives the researchers the raw data.

11.4 SUMMARY

Questionnaires are very useful for collecting demographic information, personal opinions, facts, or attitudes from respondents. One of the most significant attributes of a research form is uniform design and standardization: every respondent sees the same questions. This helps in data collection and in the statistical analysis of the data. For example, the retail store evaluation questionnaire template contains questions for evaluating retail store experiences, relating to purchase value, the range of options for product selection, and the quality of merchandise. These questions are uniform for all customers.

Structured surveys: Structured surveys gather quantitative information. The questionnaire is carefully thought out and created to collect accurate data. Additionally, it initiates a formal investigation, adds information, verifies previously gathered information, and aids in the validation of any prior hypotheses.

Unstructured surveys: Unstructured surveys gather qualitative information. They employ a straightforward format and a few branching questions, but nothing that restricts the answers provided by a respondent. The more open-ended questions are designed to elicit specific information from participants.

Telephone questionnaire: To obtain direct responses, a researcher telephones a respondent. Once you have a respondent on the phone, responses come quickly. On the other hand, respondents are frequently reluctant to divulge much information over the phone, and it is an expensive method of doing research. Your sample may not accurately reflect the entire population, as you are typically unable to get as many replies as you might with other kinds of questionnaires.

11.5 KEYWORDS

Literature review: The first step of scale development is to review the literature, in order to precisely define the construct in relation to the literature on the subject and to determine how current measures of the construct (or related constructs) might be helpful in the construction of a new scale. For instance, if you were creating a new scale to assess educational self-efficacy, you would review the literature to help you define self-efficacy.

Personal interview: A personal interview survey, also called a face-to-face survey, is a survey method that is utilized when a specific target population is involved. The purpose of conducting a personal interview survey is to explore the responses of the people in order to gather more and deeper information.

Mail survey: An embedded email survey, also called an email inline survey, is one of the most effective methods of conducting a survey. The survey is embedded in the email itself, and the respondent can answer the questions directly in the email body. These surveys are usually short, and it is recommended not to make them exhaustive.

Magnitude: The magnitude of a measurement scale refers to the size of its values: the numbers have a natural order from least to greatest. Typically, they are shown on the scale in either ascending or descending order. For instance, the order of finishers in a race runs from first and second down to last.

11.6 LEARNING ACTIVITY

1. Explain the scale development process.
___________________________________________________________________________
___________________________________________________________________________
2. State the characteristics of a measurement scale.
___________________________________________________________________________
___________________________________________________________________________
3. State the meaning of a telephone poll survey.
___________________________________________________________________________
___________________________________________________________________________
4. What is a literature review?
___________________________________________________________________________
___________________________________________________________________________
11.7 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions:
1. List the characteristics of a measurement scale.
2. What is scale development?
3. What is questionnaire development?
4. What is a measurement scale?

Long Questions:
1. Explain the various characteristics of a measurement scale.
2. Explain the process of questionnaire development.
3. Explain the process of scale development.

B. Multiple Choice Questions

1. _________ is the process of allocating numerical values to each variable's values in a set of data.
a. Identity
b. Magnitude
c. Literature review
d. All of the above

2. The size of a measurement scale is referred to as its ______
a. identity
b. magnitude
c. literature review
d. all of the above

3. ____ questions are like the questions used with exploratory research.
a. Open-ended
b. Closed-ended
c. Literature
d. None of the above

4. Dichotomous questions ask the respondent to select from _____ possible answers.
a. two
b. one
c. three
d. none

Answers: 1-a, 2-b, 3-a, 4-a

11.8 REFERENCES

Reference books
Shukla, Satishprakash (2020). Research Methodology and Statistics. Ahmedabad: Rishit Publications.
Shukla, Satishprakash (2014). Research – An Introduction (Gujarati). Ahmedabad: Kshiti Prakashan.
Kothari, C.R. (2011). Research Methodology. New Age International.
Shajahan, S. (2004). Research Methods for Management.
Thanulingom, N. Research Methodology. Himalaya Publishing.
Rajendar Kumar, C. Research Methodology. APH Publishing.
Kumar, Ranjit (2014). Research Methodology: A Step by Step Guide for Beginners. Sage Publications.

Websites
https://www.questionpro.com/blog/email-surveys/
https://byjus.com/physics/hypothesis/
UNIT - 12 MEASUREMENT SCALE-II
STRUCTURE
12.0 Learning Objectives
12.1 Introduction
12.2 Reliability
12.3 Validity
12.4 Summary
12.5 Keywords
12.6 Learning Activity
12.7 Unit End Questions
12.8 References
12.0 LEARNING OBJECTIVES
After studying this unit, you will be able to:
Define reliability;
Describe the various methods of calculating reliability;
Explain how test-retest reliability is assessed;
Differentiate between tests of reliability;
Define validity;
Describe various methods of validity;
Identify the problems that constitute threats to internal and external validity; and
Differentiate between internal and external validity.
12.1 INTRODUCTION
The goal of the majority of research is to determine the cause-and-effect relationship between variables. The research's ongoing objective is to create a hypothesis that accounts for the link between variables. This unit is primarily concerned with several issues that could jeopardize the accuracy and validity of the researcher's conclusions. The two objectives of research design are:
1) Gather data that is pertinent to the study's objectives.
2) Gather these data with the highest level of quality and reliability.
How can a researcher be sure that the data-gathering instrument being used will measure what it is supposed to measure, and will do so in a consistent manner? This question can only be answered by examining the definitions and methods of establishing the validity and reliability of a research instrument. In every measurement, validity and reliability are key concerns. Both deal with how to link measurements to constructs. Because constructs are frequently imprecise, scattered, and not readily observable, reliability and validity are important. Perfect validity and reliability are practically impossible to attain. This unit will address these two crucial components of study design. All researchers aim for the validity and reliability of their measurements. Both concepts contribute to demonstrating the veracity, reliability, or plausibility of findings. We will discuss this unit in two sections. The definition and notion of reliability are covered in the first section, followed by various techniques for determining the reliability of a research instrument. The notion of research validity is covered in the second section, where you will become familiar with the various validity categories. Lastly, a few issues that pose threats to validity are discussed.
12.2 RELIABILITY
Reliability refers to the repeatability of results. Would the results of the study be the same if it were repeated? If so, the data are reliable. If more than one person is observing a certain action or event, all of the observers must agree on the details of the observation before the data can be deemed trustworthy. Individual measures are subject to reliability as well. When people take a vocabulary test again, their results should be quite comparable. If so, the test is considered reliable.
An assessment evaluating self-esteem must yield the same results when administered again to the same subject within a short period of time in order to be considered reliable. Any important result must be more than a one-off finding and must be inherently repeatable. It must be possible for other researchers to conduct the exact same experiment, under identical circumstances, and produce the same outcomes. This supports the results and helps ensure that the idea is accepted by the larger scientific community. The experiment and research have not met all testability requirements without
this replication of statistically significant results. This condition must be met for a hypothesis to become widely acknowledged as a scientific fact.
According to Anastasi (1957), the reliability of a test refers to the consistency of scores obtained by the individual on different occasions or with different sets of equivalent items. According to Stodola and Stordahl (1972), the reliability of a test can be defined as the correlation between two or more sets of scores on equivalent tests from the same group of individuals.
For instance, if you are conducting a time-sensitive experiment, you will be employing a timer of some sort. In general, it is acceptable to assume that such instruments keep true and exact time and are trustworthy. To reduce the likelihood of a malfunction and to maintain the validity and reliability of their data, however, scientists take measurements repeatedly. On the other hand, any experiment that relies on human judgment will always be open to question. Human judgment is subject to variation, since each observer may evaluate things differently depending on the time of day and how they are feeling. This implies that such experiments are intrinsically less reliable and harder to repeat. Reliability is thus a crucial component in assessing an experiment's overall validity and strengthening its conclusions.
When an instrument measures consistently each time it is used under identical conditions with the same individuals, it is said to be reliable. Reliability is, in essence, the repeatability of a measurement. If a person's results on the same test taken twice are similar, the measurement is regarded as reliable. It is critical to keep in mind that reliability is estimated rather than measured directly. For instance, if a test is created to gauge a specific characteristic, such as neuroticism, it should produce consistent
findings every time it is given. If a test consistently yields the same result, it is deemed reliable.
Another way to define test reliability starts from the notion of measurement error. Every measurement involves some error; measurement error is the typical difference between true scores and observed scores. In psychological terms, however, the word "error" does not necessarily indicate that a mistake has been made: it means that some measurement inaccuracy always exists in a psychological test. Therefore, determining the scope of such error and devising strategies to reduce it remain fundamental objectives of psychological assessment.
As reliability is the degree to which an assessment tool produces stable and consistent results, there are several types of reliability:
1. Test-retest reliability
This is a measure of reliability obtained by giving the same test to the same set of people twice over time. The test's stability over time can then be assessed by correlating the results from Time 1 and Time 2. For example, a group of students might take a test intended to measure their understanding of psychology twice, with the second administration occurring perhaps a week after the first. A high correlation coefficient would suggest that the scores were stable. Test-retest reliability thus refers to a test's consistency across two separate administrations at different times. This method is predicated on the idea that when the same construct is measured on different occasions, there will be no significant variation in the measurement of that construct. The time interval between measurements is critical: the shorter the interval, the greater the correlation value, and vice versa. If the test is reliable, the results from the first administration should be roughly equivalent to those from the second.
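The test-retest correlation just described can be sketched in a few lines of Python. The score lists and the `pearson_r` helper below are invented for illustration, not taken from any actual study:

```python
# Hypothetical sketch: estimating test-retest reliability as the Pearson
# correlation between two administrations of the same test.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [78, 85, 62, 90, 71, 88, 64, 79]   # first administration (invented)
time2 = [80, 83, 65, 92, 70, 85, 66, 81]   # same students, one week later

r = pearson_r(time1, time2)
print(f"test-retest reliability estimate: r = {r:.3f}")
```

A coefficient near 1.0 would indicate stable scores across the two administrations; a low coefficient would suggest poor temporal stability.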
There should be a strong correlation between the two administrations.
Limitations of this approach
There are a few limitations, which include the following: (i) memory effect/carry-over effect, (ii) practice effect, and (iii) absence. These are discussed below:
a. Memory effect/carry-over effect:
One of the frequent issues with test-retest reliability is the memory effect, also known as the carry-over effect. This concern is especially valid when the two administrations occur within a brief period of time. For instance, if a memory experiment involving nonsense syllables asks subjects to remember a list in serial order, and the retest is conducted within 15 minutes, the subject is likely to remember his or her earlier responses, which can lead to artificially inflated reliability coefficients. This assumes that the conditions under which the pre- and post-tests are conducted are the same.
b. Practice effect: This occurs when repeated testing enhances test results, as is frequently observed with traditional IQ testing, where scores improve with each repetition.
c. Absence: Some participants may be absent for one of the administrations, reducing the data available for estimating reliability.
2. Parallel forms
Parallel-forms reliability goes by several names, including alternative-forms reliability, equivalent-forms reliability, and comparable-forms reliability. It compares two tests that evaluate the same attribute in identical ways. The two forms use different items, though the criteria used to select items of a given difficulty level are the same. When two forms are available, one can compare test results on one form with the other. On some occasions, the same set of people receives both forms on the same day. Parallel-forms reliability is derived by giving the same set of people alternative versions of an assessment tool (both versions must have items that test the same construct, skill, knowledge base, etc.). To assess the consistency of results across the versions, the scores from the two versions can then be correlated.
To check the reliability of a critical thinking assessment, for example, a big group of questions all related to critical thinking could be created and then randomly divided into two sets to represent the parallel forms. The Pearson product-moment correlation coefficient is used as the measure of reliability. When both test forms are administered on the same day, the only causes of variation are random error and differences between the forms. The two test forms are occasionally
administered at different times. In these circumstances, the estimate of reliability additionally reflects temporal sampling error. The parallel-forms approach offers one of the most rigorous reliability evaluations in common use. Unfortunately, parallel forms are not used as frequently in practice as would be ideal. Practical limitations make it challenging to retest the same population of people, and test developers frequently find it burdensome to design two versions of the same test. Many test developers would rather base their estimate of reliability on just one form of a test. In reality, psychologists do not always administer tests in two different forms; they frequently have only one test form, and must therefore determine the reliability for this specific set of items. There are numerous approaches to evaluating the various sources of variation within a single test.
3. Inter-rater reliability
Inter-rater reliability is a measure used to determine how closely various judges or raters agree in their assessments. Because human observers do not always interpret responses in the same manner, inter-rater reliability is important. Raters may disagree on the extent to which particular responses or pieces of information reflect understanding of the construct or skill being evaluated. Inter-rater reliability may be used, for instance, when various judges are assessing how closely art portfolios adhere to predetermined standards. It is extremely helpful when evaluations are relatively subjective; this type of reliability would therefore more likely be applied to analyzing artwork than to math problems.
4. Internal consistency reliability
This term describes the consistency of results across the various test items that examine the same concept.
a. Average inter-item correlation
Average inter-item correlation is a subtype of internal consistency reliability.
It is obtained by computing the correlation coefficient for each pair of items on a test that probe the same construct (such as reading comprehension) and then averaging all of these correlation coefficients. The result of this final step is the average inter-item correlation.
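As a sketch of the averaging procedure just described, the following Python fragment computes the average inter-item correlation for a hypothetical 4-item scale; the response matrix is invented for illustration:

```python
# Average inter-item correlation for a hypothetical 4-item scale.
# Rows are respondents, columns are items; all data are invented.
from itertools import combinations
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]

items = list(zip(*responses))  # one tuple of scores per item
pairwise = [pearson_r(items[i], items[j])
            for i, j in combinations(range(len(items)), 2)]
avg_inter_item = sum(pairwise) / len(pairwise)
print(f"average inter-item correlation: {avg_inter_item:.3f}")
```

With 4 items there are 6 item pairs; a high average (close to 1) suggests the items are measuring the same construct consistently.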
b. Split-half reliability
Split-half reliability is another subtype of internal consistency reliability. To obtain it, all the test items intended to probe the same body of knowledge (such as World War II) are first "split in half" to produce two sets of items. The whole test is administered to a group of people, the total score for each set is computed, and the split-half reliability is then found from the correlation between the two set totals. A problem with this approach is that shorter tests run the risk of losing reliability, so it is most safely used with long tests. To correct for the shortening, the Spearman-Brown formula can be employed, estimating the correlation as if each half were full length:
r = (2rhh)/(1 + rhh)
(where rhh is the correlation between the two halves). For example, if the two halves correlate at rhh = 0.60, the estimated full-length reliability is (2 × 0.60)/(1 + 0.60) = 0.75.
12.3 VALIDITY
Validity is a term used to describe how credible or believable the research is. Are the results trustworthy? Is hand size a reliable indicator of intelligence? The answer is almost certainly "No, it is not." Is the SAT score a reliable indicator of first-year college GPA? The answer depends on how much evidence there is for such a relationship. Validity covers the complete experimental idea and determines whether the outcomes satisfy all criteria set out by the scientific research process. For instance, the sample groups must have been randomly assigned, and the controls must have been administered with the necessary care and thoroughness. Internal validity specifies the organizing principles of an experimental design and covers all phases of the scientific research process.
According to Cronbach (1951), validity is the extent to which a test measures what it purports to measure.
According to Freeman (1971), an index of validity shows the degree to which a test measures what it purports to measure, when compared with accepted criteria. According to Anastasi (1988), the validity of a test concerns what the test measures and how well it does so.
The definitions above make clear that in order to assess a test's validity, it must be compared with some ideal, independent measures or standards. The calculated correlations between the test and an ideal criterion are called validity coefficients. Independent criteria are measurements, outside the test itself, of the trait or collection of qualities that the test claims to measure. With the understanding that validity refers to how well a test measures what it is purported to measure, there are a number of types of validity:
1. Face Validity
The face validity of a measure indicates whether it appears to be evaluating the study's intended construct. Stakeholders can evaluate face validity quickly. Although this form of validity is not particularly "scientific," it can be crucial in motivating stakeholders, who may lose interest in the activity if they do not think the measurement accurately reflects their level of competence. As an illustration, if a scale of art appreciation is developed, each item should be relevant to the various elements and categories of art; stakeholders may not be motivated to give their best effort or invest in the measure if they do not believe it is a true assessment of art appreciation. Face validity describes what a test seems to measure on the surface, and it depends on the researcher's judgment: each question is examined and adjusted until the researcher is confident it is a reasonable indicator of the intended construct.
2. Construct Validity
It is important to confirm that the measure is measuring the intended concept and not other factors by checking its construct validity. This form of validity can be evaluated by utilizing a panel of experts familiar with the construct. The experts can examine the items and determine what each particular item is meant to measure.
Students might also be involved in this process to provide their feedback. Example: a women's studies program may create a comprehensive evaluation of students' learning over the course of the major. If the wording and phrasing used in the questions are complex, the test can unintentionally end up being a reading comprehension test rather than a test of
women's studies. It is important that the measure actually assesses the intended construct, rather than an extraneous factor. Compared with other validity approaches, construct validity is more sophisticated. Construct validity was defined by McBurney and White as a test's ability to accurately measure the constructs it was intended to evaluate. There are various ways to judge whether a test has construct validity:
a. The test should actually measure whatever theoretical construct it supposedly tests, and not something else. For example, a test of leadership ability should not actually test extraversion.
b. A test should measure what it is intended to measure and should not measure theoretically unrelated constructs. For instance, a musical aptitude examination should not demand too much reading comprehension.
c. A test should be effective at forecasting outcomes related to the theoretical ideas it measures. For instance, a musical aptitude test should indicate which individuals will benefit from taking music classes, separate those who have chosen music as a vocation from those who have not, and relate to other musical aptitude tests.
There are two types of construct validity: convergent validity and divergent (or discriminant) validity.
d. Convergent Validity: the extent to which a measure is correlated with other measures with which it is theoretically predicted to correlate.
e. Discriminant Validity: the extent to which the operationalisation is not correlated with other operationalisations with which it theoretically should not be correlated.
3. Criterion-Related Validity
In criterion-related validity, test results are correlated with a different criterion of interest and used to predict future or current performance. For instance, suppose a physics program created a metric to evaluate cumulative student learning over the course of the major.
The new measure might be correlated with an objective examination of academic proficiency in the subject, such as the GRE subject test or an ETS field test. The higher the correlation between the existing measure and the new measure, the more confidence stakeholders can have in the new evaluation instrument.
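A minimal sketch of this correlation check in Python follows; the scores for the hypothetical departmental measure and the external criterion are invented for illustration:

```python
# Criterion-related validity sketch: correlate a new measure with an
# established criterion. All scores below are invented for illustration.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

new_measure = [55, 72, 60, 88, 47, 79, 66, 91]   # hypothetical program exam
criterion   = [48, 70, 58, 85, 50, 74, 63, 89]   # hypothetical external test

validity_coefficient = pearson_r(new_measure, criterion)
print(f"criterion-related validity coefficient: r = {validity_coefficient:.3f}")
```

A high coefficient would support using the new measure in place of, or alongside, the established criterion.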
According to the concept of criterion-related validity, a valid test should have a strong correlation with other measures of the same theoretical construct; a valid intelligence test, for instance, should correlate strongly with other intelligence tests. A test is said to have criterion-related validity if it can successfully predict the construct using one or more indicators. The two types of criterion validity are as follows:
A. Concurrent Validity
This occurs when criterion measures and test results are obtained at the same time. It demonstrates how accurately the test results represent the subject's current standing in relation to the criterion. For instance, if a test accurately represents the amount of anxiety a person is experiencing right now, it would be said to have concurrent validity. Concurrent evidence of test validity is typically preferred for achievement exams and diagnostic clinical testing.
B. Predictive Validity
Predictive validity results when criterion measures are obtained after the test. Aptitude exams, for instance, can help determine who is more likely to succeed or fail in a given area. Predictive validity is particularly relevant for entrance exams and occupational tests.
4. Formative Validity
When employed in outcomes evaluation, formative validity assesses a measure's capacity to provide data that can be used to improve the program under investigation. As an illustration, one might evaluate students' understanding of an entire discipline when creating a history rubric. If the assessment tool can show that students lack expertise in a particular subject, such as the Civil Rights Movement, it is providing useful information that can be used to improve the course or program requirements.
5. Internal Validity
The most fundamental sort of validity is internal validity, since it examines the consistency of the relationships between the independent and dependent variables.
This form of validity is an estimate of the extent to which conclusions about a causal relationship can be drawn, given the measures used and the research methodology. A higher degree of internal validity is made feasible by appropriately designed experimental approaches, in which the impact of an independent variable on the dependent variable is observed under highly controlled settings.
Threats to Internal Validity
These include: (i) confounding, (ii) selection bias, (iii) history, (iv) maturation, (v) repeated testing, (vi) instrument change, (vii) regression toward the mean, (viii) mortality, (ix) diffusion, (x) compensatory rivalry, and (xi) experimenter bias.
a. Confounding: Confounding occurs when it is impossible to distinguish between the effects of two different factors in an experiment, leading to a confused interpretation of the findings. Confounding is one of the major threats to the validity of an experiment, and it is a particularly serious issue in studies where the experimenter has no control over the independent variable. Subject variables can influence the outcomes when participants are chosen based on the presence or absence of a condition. Where a misleading link cannot be avoided, a rival hypothesis to the original causal hypothesis may have to be entertained.
b. Selection bias: Any bias in the selection of a group can compromise internal validity. Examples of selection-related variables include gender, personality, mental and physical capabilities, motivation level, and willingness to participate. Selection bias refers to the problem of pre-test differences between groups; these may interact with the independent variable, influence the observed outcome, and create problems. There could be a threat to internal validity if, at the time of selection, a disproportionate number of the subjects to be tested share similar subject-related variables. For example, if two groups are formed, an experimental group and a control group, the subjects in the two groups may differ in terms of the independent variable but be similar on one or more subject-related variables. It would then be challenging for the researcher to determine whether a difference between the groups is due to the independent variable, a subject-related variable, or random group assignment.
Controlling for such variables is not always achievable, because certain important factors may go unnoticed.
c. History: Events such as natural disasters and political upheavals may affect participants' responses, attitudes, and behavior during the experiment. These events may also occur between repeated measurements of the dependent variable. In such situations, it is impossible to say whether a change in the dependent variable was brought about by the historical event or by the independent variable.
d. Maturation:
People frequently change over the course of an experiment or in the intervals between observations. For instance, young children in longitudinal studies may mature in the experience, skills, or attitudes being measured. Both long-term changes (such as physical growth) and short-term changes (such as exhaustion and illness) may affect how a subject responds to the independent variable. As a result, it may be difficult for researchers to determine whether an observed difference is due to the passage of time or to other variables.
e. Repeated testing: Participants may become biased by repeated testing. They may remember the correct answers or develop test-taking habits as a result of frequent administration. This, too, poses a threat to internal validity.
f. Instrument change: If an instrument is changed over the course of an experiment, the change itself becomes an alternative explanation for the results, and internal validity may be affected.
g. Compensatory rivalry/resentful demoralisation: The behavior of the control group may change as a result of the study. For instance, participants in the control group might put forth greater effort to ensure that the experimental group's anticipated superiority is not demonstrated. This does not indicate that the independent variable had no effect or that there was no relationship between the dependent and independent variables. Conversely, changes in the dependent variable may be brought about only by a demoralized control group working less hard or losing motivation.
h. Experimenter bias: When experimenters unintentionally behave differently toward members of the control and experimental groups, the experiment's outcomes are affected.
Experimenter bias can be reduced by keeping the experimenter blind to the conditions or goal of the experiment and by standardizing the procedure as much as possible.
i. Mortality: Some participants may drop out of the study before its completion. If dropout produces relevant bias between the groups, alternative explanations become possible for the observed differences.
j. Diffusion: If treatment effects spread from the treatment groups to the control groups, a lack of difference between experimental and control groups may be observed. This, however, does not mean that the independent variable has no effect or that there is no relationship between the dependent and independent variables.
6. External Validity
According to McBurney and White (2007), external validity concerns whether the results of the research can be generalised to other situations: different subjects, settings, times, and so on. External validity suffers from the fact that experiments using human participants often employ small samples collected from a particular geographic location or with idiosyncratic features (e.g. volunteers). Because of this, it cannot be guaranteed that conclusions drawn about cause-and-effect relationships are actually applicable to people in other geographic locations or to people without these features.
Reliability vs. Validity
Suppose a scientist creates a new IQ test that detects intelligence more quickly than the traditional IQ exam:
The test is reliable but not valid if it consistently produces scores of 135 when the candidate's actual IQ is 120.
The new test is neither valid nor reliable if it produces scores for a candidate of 87, 65, 143, and 102; it measures inconsistently and fails to capture what it is meant to measure.
Scores of 100, 111, 132, and 150 also indicate low validity and reliability. The distribution of these scores is marginally better than the one above, though, as it brackets the true score rather than missing it entirely; such a test is probably plagued by severe random error.
It is reasonable to conclude that the test is valid and reliable if it consistently yields a score of 118. Validity increases as the distance from 120 decreases, and reliability increases as the difference between repeat scores decreases.
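The IQ illustration above can be summarized with a small sketch. The 5-point thresholds below are arbitrary choices made for this example, not standard values:

```python
# Sketch of the IQ example: the candidate's true IQ is 120. Validity is low
# when scores sit far from the true value; reliability is low when repeat
# scores scatter widely. Thresholds are invented for illustration.
from statistics import mean, stdev

TRUE_IQ = 120

def describe(scores):
    bias = mean(scores) - TRUE_IQ                        # distance from truth
    spread = stdev(scores) if len(scores) > 1 else 0.0   # scatter of repeats
    reliable = spread < 5
    valid = abs(bias) < 5 and reliable
    return reliable, valid

print(describe([135, 135, 135, 135]))  # consistent but off target
print(describe([87, 65, 143, 102]))    # scattered and off target
print(describe([118, 118, 119, 118]))  # close to 120 and consistent
```

The first case comes out reliable but not valid, the second neither, and the third both, matching the three scenarios in the text.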
Given the consistency of the inaccuracy, a test that consistently underestimates IQ by two points can be as helpful as one that is more accurate.
12.4 SUMMARY
Reliability is the quality of measurement consistency, and there are numerous varieties of it. The consistency of psychological test results can be assessed using the Pearson product-moment correlation coefficient; when the same test is given twice over time, this is called test-retest reliability. Alternate-forms reliability is calculated by comparing the results of two equivalent forms given to a sizable group of diverse participants in a counterbalanced manner. Internal consistency approaches to reliability include split-half reliability, in which results from the two halves of a test are correlated with one another, and coefficient alpha, which can be viewed as the mean of all feasible split-half coefficients. Inter-scorer reliability is necessary for examinations in which scores are assigned at the examiner's discretion. Inter-scorer reliability is simple to calculate: a sample of tests is independently scored by two or more examiners, and the scores for pairs of examiners are then correlated.
Validity is the extent to which a test accurately evaluates the variables it is designed to measure. A test is valid to the extent that conclusions drawn from it are pertinent, significant, and practical. There are different types of validity. Content validity refers to how accurately a test's questions, tasks, or items reflect the range of behavior it was intended to measure. A test has face validity if test users, examiners, and especially test subjects perceive it to be valid. When a test successfully predicts performance on a suitable outcome measure, it demonstrates criterion-related validity. An investigation has internal validity when a cause-and-effect link between the independent and dependent variables is truly present. Confounding occurs when two independent variables in an experiment cannot be examined separately.
External validity concerns whether the research's findings may be applied to other circumstances: different people, places, times, and so on. Events that take place outside the laboratory, maturation, testing effects, regression effects, selection, and mortality are all threats to an experiment's internal validity. Problems arising from generalizing to different subjects, times, or situations pose a threat to external validity. Experimenter bias can be reduced by keeping the experimenter blind to the conditions or goal of the experiment and by standardizing the procedure as much as feasible.
12.5 KEYWORDS Variable: An image, perception or concept that can be measured, and hence is capable of taking on different values, is called a variable. A variable is also defined as anything that has a quantity or quality that varies. Validity: Validity means that correct procedures have been applied to find answers to a question. For example, if a large plot of land has to be measured, the result should be the same whether we use a metre scale or a measuring tape, once the values obtained are put into the formula used to calculate the area. Reliability: Reliability refers to the quality of a measurement procedure that provides repeatability and accuracy. This can be understood through the example of preparing a bill of purchase using software that has built-in details of the taxes and charges levied, the formulas to be used, and the format in which the bill is printed; this ensures that all bills have their values calculated to the same standard. Empirical: The processes adopted should be tested for accuracy, and each step should be coherent in progression. This means that any conclusions drawn are based upon firm data gathered from real-life experiences or observations. Conceptual research: Conceptual research is associated with some theoretical idea(s) or presupposition and is generally used by philosophers and thinkers to develop new concepts or to gain a better understanding of an existing concept in practice. 12.6 LEARNING ACTIVITY 1. Define validity. ___________________________________________________________________________ ___________________________________________________________________________ 2. State the meaning of reliability. ___________________________________________________________________________ ___________________________________________________________________________ 3. List the various types of validity.
___________________________________________________________________________ ___________________________________________________________________________
4. What is face validity? ___________________________________________________________________________ ___________________________________________________________________________ 5. What is construct validity? ___________________________________________________________________________ ___________________________________________________________________________ 12.7 UNIT END QUESTIONS A. Descriptive Questions Short Questions: 1. What is face validity? 2. What is construct validity? 3. What is internal validity? 4. What is reliability? Long Questions: 1. Explain the various types of validity. 2. Explain the various types of reliability. 3. Explain test-retest reliability. 4. Explain criterion-related validity. B. Multiple Choice Questions 1. The repeatability of results is referred to as _____ a. Reliability b. Validity c. Test-retest d. Predictive validity
2. ______ is an error that happens when it is impossible to distinguish between the effects of two different factors in an experiment. a. Confounding b. Reliability c. Construct d. Validity 3. A _______ measure called inter-rater reliability is used to determine how closely various judges or raters agree in their assessments. a. Reliability b. Confounding c. Validity d. Test-retest 4. ____ means that correct procedures have been applied to find answers to a question. a. Validity b. Reliability c. Test-retest d. Parallel forms Answers 1-a, 2-a, 3-a, 4-a 12.8 REFERENCES Reference Books
Shukla, Satishprakash (2020). Research Methodology and Statistics. Ahmedabad: Rishit Publications. Shukla, Satishprakash (2014). Research – An Introduction (Gujarati). Ahmedabad: KshitiPrakashan. Kothari, C.R. (2011). Research Methodology. New Age International. Shajahan, S. (2004). Research Methods for Management. Thanulingom, N. Research Methodology. Himalaya Publishing. Rajendar Kumar, C. Research Methodology. APH Publishing. Kumar, Ranjit (2014). Research Methodology: A Step by Step Guide for Beginners. Sage Publication. Website https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6188693/#:~:text https://research-methodology.net/research-methodology/reliability-validity-and-repeatability/research-validity/ https://conjointly.com/kb/introduction-to-validity/
UNIT - 13 MEASUREMENT SCALE-III STRUCTURE 13.0 Learning Objectives 13.1 Introduction 13.2 Types of data analysis software 13.3 Benefits of data analysis software 13.4 Data analysis software features 13.5 Summary 13.6 Keywords 13.7 Learning Activity 13.8 Unit End Questions 13.9 References 13.0 LEARNING OBJECTIVES After studying this unit, you will be able to: Define various types of data analysis software Understand the various benefits of data analysis software Describe the features of data analysis software 13.1 INTRODUCTION Data analysis software is a tool used to process and manipulate information and to analyze the relationships and correlations within a dataset, providing high-quality analysis such as transcription analysis, discourse analysis, grounded theory methodology, and content analysis, as well as supporting decision-making through its statistical and analytical capabilities. Based on these capabilities, data analysis software is classified as exploratory data analysis and confirmatory data analysis software. Data analysis brings out the meaningful and symbolic content of qualitative and quantitative information by providing explanation, comprehension, or interpretation of people and things. The two main techniques for gathering and analyzing data are qualitative and quantitative. Since they share the same goals, these approaches can be applied separately or concurrently.
Numerical analysis, which involves gathering data, classifying it, and then computing results using a variety of statistical techniques, is frequently associated with quantitative analysis. Qualitative analysis, by contrast, focuses on the examination of data that cannot be measured and is concerned with comprehension of, and insight into, things. Quantitative and qualitative research data analysis strategies provide tools for transcription analysis, coding and text interpretation, algorithmic abstraction, content analysis, and discourse analysis. These tools help users manage large amounts of information, save time, increase flexibility, and enhance the validity and applicability of information analysis. Software tools help users handle and edit data more easily and make it easier to examine the connections and correlations across datasets. Data analysis software provides tools for qualitative analysis such as transcription analysis, content analysis, discourse analysis, and the grounded theory approach, and its statistical and analytical capabilities are available for use in decision-making processes. Descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA) are different types of software processes. Data analysts utilize software and programs known as data analyst tools to create and carry out analytical procedures that assist businesses in making better, more informed business decisions while lowering costs and raising profits. Software for statistical, qualitative, or predictive analysis of big data sets is known as data analysis software. Universities, healthcare facilities, corporate research and development divisions, and highly technical businesses with big data sets frequently employ this software.
The main user groups for these technologies are researchers, software developers, and data analysts, because the tools frequently call for proficiency in statistical programming languages. 13.2 TYPES OF DATA ANALYSIS SOFTWARE 1. Software for statistical analysis Statistical data analysis software processes and analyzes large data sets through intricate mathematical calculations. These programs build statistical models from data that the statistical engineer, researcher, or data analyst loads into the software, using statistical programming languages like R or Python.
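To make the idea of "building a statistical model" concrete, here is a minimal, illustrative sketch in Python (one of the languages named above). It fits a straight line y = a + b·x to a handful of invented data points by least squares; the function name and the data are hypothetical, and real statistical packages automate this kind of computation on far larger data sets.

```python
# Minimal sketch: fitting a simple linear model y = a + b*x by least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical observations
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # prints: 0.09 1.99
```

The same two lines of arithmetic (covariance over variance for the slope, then the intercept from the means) underlie the regression output such software reports.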
Although originally developed for highly sophisticated academic, medical, and scientific research, these tools are frequently employed in the industrial and manufacturing sectors, which must identify and enhance their processes and refine their materials to remain competitive. 2. Qualitative data analysis software Qualitative data analysis software applies analogous mathematical principles to help academics, researchers, and entrepreneurs comprehend trends and statistical data points from non-numerical inputs, including text, video, audio, and images. To assist users in understanding their data, these tools also make use of statistical analysis and machine learning. Qualitative data analysis software frequently enables analysts to label various forms of information, including non-text data, which they can later train the program to recognize automatically. These tools increase the amount of data that a single analyst can process, accelerating research and the time it takes to act on data. Several types of data analysis software exist, depending on business and technology needs. The major data analysis software packages are: 1. NVivo NVivo is a program that supports both mixed-methods and qualitative analysis. It aids users in organizing, analyzing, and discovering insights in qualitative data from sources like interviews, open-ended survey responses, publications, social media, and websites. 1. Investigate and classify unstructured text, audio, video, or image data. 2. Improve your internal workflow and coverage procedures while managing your information with ease. 3. Deliver high-quality results supported by a transparent discovery and analysis process. 4. Reduce project timelines and boost productivity. 5. You can import articles into NVivo and generate transcripts using reference management software. 2.
Transana Transana is free software that can be used for both transcription and analysis of audio and video data. Qualitative data analysis of still images, audio, and video can be done in a variety of ways using Transana.
The graphical and text-based reports offered by Transana are flexible and scalable. Users can investigate analytic linkages in their data and retain their insights in a novel way. Multiple researchers can work on the same set of data simultaneously with Transana Multi-User, even from places that are miles apart. Coded data can be investigated using text and graphic reports, and Transana allows you to apply codes to still images, including snapshots of video data. 3. MAXQDA MAXQDA is capable software for qualitative, quantitative, and mixed-methods data analysis. It gives researchers strong, cutting-edge, and easy-to-use analytical tools that support the success of a research effort. MAXQDA can help you with the methodical analysis and interpretation of your data. Professionals use it to examine a wide range of data, including interviews, reports, tables, focus groups, online surveys, films, audio files, literary works, and much more. Three product variants are available: MAXQDA Standard, MAXQDA Plus, and MAXQDA Analytics Pro. MAXQDA Standard supports qualitative and mixed-methods data analysis, MAXQDA Plus adds a module for quantitative text analysis, and MAXQDA Analytics Pro adds a module for statistical analysis. With MAXQDA, you can quickly distinguish different focus group speakers, evaluate each speaker individually, compare their contributions, and visualize them in various ways. 4. Qiqqa Qiqqa is a cutting-edge data analysis tool that is popular with corporations, researchers, and academia. You may search for, view, and annotate PDFs using this essential free research and reference manager. All of a user's PDFs are kept safe with Qiqqa, which also makes them instantly available and searchable on all of the user's devices. While scanning PDFs in Qiqqa, it helps users capture all of their tags, comments, highlights, and annotations.
Users can rapidly review, summarize, and create bibliographies for their work. Customers can use Qiqqa to follow quotations, authors, and keywords to decide what to read next. 5. ATLAS.ti For those who like to see the big picture while also appreciating the minutiae, ATLAS.ti is industry-leading software for qualitative data analysis. It reveals linkages and meanings, enabling users to contextualize their conclusions within the data and to obtain deep insights using powerful and intuitive QDA coding tools. It is employed by academics and practitioners in a wide range of disciplines, including social science, the arts, design, communication, science, economics, psychology, and sociology. It offers analytical and visualization tools meant to reveal fresh, illuminating perspectives on the data. ATLAS.ti 8 for Windows sets new benchmarks for computer-assisted qualitative data analysis. In order to compete with your rivals and give your firm an edge, you need to be able to prospect and clean massive amounts of data, and for this you need the right data analysis tools. Dedoose, online QA, Annotations, and many other such data analysis software products are readily available on the market. Depending on your requirements, stage of development, revenue, and organizational budget, you can choose the best data analysis tool. 13.3 BENEFITS OF DATA ANALYSIS SOFTWARE 1. Big data ingestion Data lakes or data warehouses are put to use for the business through data analysis tools. Businesses that generate a lot of data through routine operations can use that data to improve business decisions. These tools are made to retrieve enormous data sets before calculations and analyses are performed. Data analytics software differs from standard business intelligence or data visualization software in how quickly it can absorb and make sense of vast data repositories. 2.
Tools for ML and AI A single corporation or organization's processing capacity is greatly increased by data analysis software, which is designed around intricate statistical analysis and mathematical
formulae. These technologies frequently work well with artificial intelligence (AI) and machine learning (ML) algorithms, which in turn lessen the need for manual oversight of the data analysis. ML and AI features give organizations the freedom to facilitate data analysis rather than becoming bogged down in manual computations. 3. Complex statistical analysis The key advantage of data analysis solutions is their ability to apply sophisticated statistical algorithms to substantial data sets. The software includes all the necessary pre-programmed formulas, including those for cluster analysis, automatic linear models, Monte Carlo simulation, and ordinary least squares regression. Additionally, many data analysis software packages allow analysts to program their own formulas, allowing for personalized results. 13.4 DATA ANALYSIS SOFTWARE FEATURES 1. Data processing Data analysis software should include tools for loading, cleaning, and preparing data for analysis. Before the software's computing power is used to produce insights, these tools make sure that the data is in the proper format. Some data analytics tools can also process text data that is missing, imbalanced, or not in a conventional format. 2. Visualizations All data analysis tools should include some form of data visualization, and many offer sophisticated visuals that help analysts convey their findings more effectively. These features enable the inclusion of visualizations in static or browser-based digital reports. However, because there are many alternatives for data analysis software, teams should be careful to select a program whose charts and graphs are appropriate for their data. 3. Data coding A data coding feature is frequently included in qualitative analysis tools, allowing the researcher or analyst to give names to the many sorts of data they discover in their databases.
As a result, text, audio, and video data are more quickly and easily searchable. With the use of these tools, analysts may easily and rapidly define their codes and code data. 4. Scripting
Individual analysts can further customize the processing, presentation, and computation of data using scripting tools. Scripting tools assist analysts in defining formulas, narrowing data sets, and performing detailed analysis that precisely matches the data sets and required outputs. 5. Statistical evaluation Statistical analysis tools provide advanced statistical formulas and data visualizations to assist organizations in processing and comprehending massive amounts of data. These technologies are frequently used to support the development of predictive models that give businesses a range of outcomes for identifying best practices and guiding future decisions. Business intelligence and predictive analytics software may contain some of these characteristics, but statistical analysis software offers more sophisticated functionality. 6. ML and AI At their most fundamental levels, artificial intelligence and machine learning are simply ways to comprehend data using mathematical and statistical algorithms. Using the AI and ML technologies available in data analysis tools, analysts can create algorithms that ingest and "learn from" the data they input. While not all data analysis software solutions have AI or ML capabilities, many of the algorithms these tools offer are a natural outgrowth of these features. 7. Multimedia analysis The two main categories of data analysis tools are statistical analysis tools and qualitative analysis tools. While qualitative analysis tools can ingest and analyze unstructured data from text, photos, video, and audio, statistical analysis tools deal largely with numerical data. Some statistical analysis programs also support non-numerical data, such as binary and Unicode.
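One of the pre-programmed techniques named earlier is Monte Carlo simulation. As a hedged illustration of the idea (not any particular package's implementation), the following Python sketch estimates a probability by repeated random sampling; the dice example is hypothetical, and the seed is fixed so the run is reproducible.

```python
import random

# Illustrative Monte Carlo simulation: estimate the probability that two dice
# sum to 10 or more by repeated random sampling (the exact value is 6/36).
def estimate_p(trials, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    hits = sum(
        1
        for _ in range(trials)
        if rng.randint(1, 6) + rng.randint(1, 6) >= 10
    )
    return hits / trials

print(estimate_p(100_000))  # close to 6/36 ≈ 0.1667
```

The same pattern (simulate many random trials, then summarize the outcomes) scales up to the risk and forecasting models these packages build.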
13.5 SUMMARY Data analysis software is a tool used to process and manipulate information and to analyze the relationships and correlations within a dataset, providing high-quality analysis such as transcription analysis, discourse analysis, grounded theory methodology, and content analysis, as well as supporting decision-making through its statistical and analytical capabilities. Based on these capabilities, data analysis software is classified as exploratory data analysis and confirmatory data analysis software. Data analysis brings out the meaningful and symbolic content of qualitative and quantitative information by providing explanation, comprehension, or interpretation
of people and things. The two main techniques for gathering and analyzing data are qualitative and quantitative. Since they share the same goals, these approaches can be applied separately or concurrently. Qualitative data analysis software frequently enables analysts to label various forms of information, including non-text data, which they can later train the program to recognize automatically. These tools increase the amount of data that a single analyst can process, accelerating research and the time it takes to act on data. The key advantage of data analysis solutions is their ability to apply sophisticated statistical algorithms to substantial data sets. The software includes all the necessary pre-programmed formulas, including those for cluster analysis, automatic linear models, Monte Carlo simulation, and ordinary least squares regression. Additionally, many data analysis software packages allow analysts to program their own formulas, allowing for personalized results. 13.6 KEYWORDS NVivo: NVivo is a program that supports both mixed-methods and qualitative analysis. It aids users in organizing, analyzing, and discovering insights in qualitative data from sources like interviews, open-ended survey responses, publications, social media, and websites. Visualizations: All data analysis tools should include some form of data visualization, and many offer sophisticated visuals that help analysts convey their findings more effectively. These features enable the inclusion of visualizations in static or browser-based digital reports. However, because there are many alternatives for data analysis software, teams should be careful to select a program whose charts and graphs are appropriate for their data.
Data coding: A data coding feature is frequently included in qualitative analysis tools, allowing the researcher or analyst to give names to the many sorts of data they discover in their databases. As a result, text, audio, and video data are more quickly and easily searchable. With these tools, analysts can easily and rapidly define their codes and code their data.
Artificial intelligence: With support from artificial intelligence and other data-collecting tools, people can use their computer or mobile device to accurately identify a species. Artificial intelligence systems can also serve as a real-time discovery tool for invasive or endangered species. 13.7 LEARNING ACTIVITY 1. Define data analysis software. ___________________________________________________________________________ ___________________________________________________________________________ 2. State the features of data analysis software. ___________________________________________________________________________ ___________________________________________________________________________ 3. State the benefits of data analysis software. ___________________________________________________________________________ ___________________________________________________________________________ 13.8 UNIT END QUESTIONS A. Descriptive Questions Short Questions: 1. What is artificial intelligence? 2. List the types of data analysis software. 3. What is qualitative data analysis software? 4. What is data processing? Long Questions: 1. Explain the various types of data analysis software. 2. Explain in detail the features of data analysis software. 3. What are the various benefits of data analysis software? 4. What is qualitative data analysis software? B. Multiple Choice Questions
1. _________ is employed for data analysis and is a program that supports both mixed-methods and qualitative analysis. a. NVivo b. Transana c. MAXQDA d. All of the above 2. _________ is industry-leading software for qualitative data analysis. a. ATLAS.ti b. Transana c. NVivo d. MAXQDA 3. _________ is a tool used to process and manipulate information and to analyze relationships within data. a. Data analysis software b. NVivo c. MAXQDA d. ATLAS.ti 4. A ______ feature is frequently included in qualitative analysis tools. a. Data coding b. Software c. MAXQDA d. ATLAS Answers 1-a, 2-a, 3-a, 4-a 13.9 REFERENCES Reference Books
Shukla, Satishprakash (2020). Research Methodology and Statistics. Ahmedabad: Rishit Publications. Shukla, Satishprakash (2014). Research – An Introduction (Gujarati). Ahmedabad: KshitiPrakashan. Kothari, C.R. (2011). Research Methodology. New Age International. Shajahan, S. (2004). Research Methods for Management. Thanulingom, N. Research Methodology. Himalaya Publishing. Rajendar Kumar, C. Research Methodology. APH Publishing. Kumar, Ranjit (2014). Research Methodology: A Step by Step Guide for Beginners. Sage Publication. Website https://monkeylearn.com/blog/qualitative-data-analysis-software/ https://www.predictiveanalyticstoday.com/top-qualitative-data-analysis-software/ https://guides.nyu.edu/quant/statsoft
UNIT - 14 MEASUREMENT SCALE-IV STRUCTURE 14.0 Learning Objectives 14.1 Introduction 14.2 IBM-SPSS 14.2.1 Features of SPSS 14.2.2 Statistical methods of SPSS 14.2.3 Types of SPSS 14.3 IBM-AMOS 14.4 Reference Management Software 14.5 Software for Detection of Plagiarism 14.6 Summary 14.7 Keywords 14.8 Learning Activity 14.9 Unit End Questions 14.10 References 14.0 LEARNING OBJECTIVES After studying this unit, you will be able to: Understand the various software related to plagiarism Define the various reference management software Describe the various types of SPSS Understand IBM-AMOS 14.1 INTRODUCTION Research may be very broadly defined as the systematic gathering of data and information and its analysis for the advancement of knowledge in any subject. Research attempts to answer intellectual and practical questions through the application of systematic methods. Webster's Collegiate Dictionary defines research as "studious inquiry or examination; esp: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised
theories or laws". Some people consider research a movement, a movement from the known to the unknown. It is actually a voyage of discovery. We all possess the vital instinct of inquisitiveness, for when the unknown confronts us, we wonder, and our inquisitiveness makes us probe and attain full and fuller understanding of the unknown. This inquisitiveness is the mother of all knowledge, and the method which man employs for obtaining knowledge of whatever is unknown can be termed research. Research is an academic activity, and as such the term should be used in a technical sense. According to Clifford Woody, research comprises defining and redefining problems, formulating hypotheses or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the formulating hypothesis. D. Slesinger and M. Stephenson in the Encyclopaedia of Social Sciences define research as "the manipulation of things, concepts or symbols for the purpose of generalizing to extend, correct or verify knowledge, whether that knowledge aids in construction of theory or in the practice of an art." Research is, thus, an original contribution to the existing stock of knowledge, making for its advancement. It is the pursuit of truth with the help of study, observation, comparison and experiment. In short, the search for knowledge through an objective and systematic method of finding a solution to a problem is research. The systematic approach concerning generalization and the formulation of a theory is also research. As such, the term 'research' refers to the systematic method consisting of enunciating the problem, formulating a hypothesis, collecting the facts or data, analyzing the facts, and reaching certain conclusions, either in the form of solution(s) to the concerned problem or in certain generalizations for some theoretical formulation.
14.2 IBM-SPSS SPSS stands for "Statistical Package for the Social Sciences". It is a tool from IBM, first introduced in 1968 as a single software suite. The primary purpose of this software is the statistical analysis of data. SPSS is primarily utilized by market researchers, health researchers, survey firms, education researchers, government, marketing organizations, data miners, and many others in the fields of healthcare, marketing, and educational research. It offers data analysis for group identification, numerical outcome forecasting, and descriptive statistics. To manage data effectively, this software also offers tools for data processing, charting, and direct marketing.
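As a rough illustration of the descriptive statistics a package like SPSS reports for a variable, the following sketch uses Python's standard statistics module on a small, invented set of survey scores (the numbers are hypothetical, chosen only for the example):

```python
import statistics

# Hypothetical survey scores for one ratio-scale variable
scores = [12, 15, 15, 18, 20, 22, 15, 19]

print("mean  :", statistics.mean(scores))             # 17
print("median:", statistics.median(scores))           # 16.5
print("mode  :", statistics.mode(scores))             # 15
print("stdev :", round(statistics.stdev(scores), 2))  # 3.3
```

The mean, median, and mode shown here are exactly the summary measures that, as noted earlier in this material, can be computed for ratio-scale data.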