
portfolio cover_merged

Published by Advitya Singh, 2023-06-04 14:24:37


THE DASHBOARD
[Screenshot: an interactive Excel dashboard dynamically representing the IMDB data]

Tech-Stack Used
MICROSOFT EXCEL 2019 was used to carry out the analysis of the project. The following tools were extensively used during the process :-
• Data cleaning tools
• Pivot tables
• VLOOKUP, INDEX and OFFSET functions
• Use of slicers and tools
• Pivot charts and visualisation charts
• Interactive dashboards

Insights
1. Key insights or findings discovered during the project are given as under :-
(a) DATA CLEANING PROCESS: After studying the data, the following process and tools in Excel were used to clean the data :-
(i) Blank cells. A number of blank cells were found in various columns like color, actor name, user reviews, gross, plot, etc. These were removed via F5 > Go To Special > Blanks; after highlighting the blanks, Ctrl+(-) deleted all blank rows. Left in place, they would have led to errors in future analysis and pivot-table formation.
(ii) Dropping columns. Columns and data not required for analysis were dropped to reduce the data load during creation of pivot tables, thereby shortening the pivot-table field list.
(iii) Splitting column data. The data in the column named "Genres" had multiple values separated by a special character. Split Text to Columns with Multiple Delimiters using the TEXTSPLIT Excel formula was carried out for ease of analysis.
(b) HIGHEST PROFIT MOVIES - The analysis clearly indicated the movies with the highest profit. This was shown by a scatter plot, indicating the outliers clearly. To work out the top profit-generating movies, the top 10 movies were sorted by highest profit and shown in tabular form and a bar chart.
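The top-10-by-profit step above can be mirrored outside Excel as well. Below is a minimal Python sketch under the assumption that each movie record carries `gross` and `budget` fields (the records and values are invented, not taken from the IMDB dataset):

```python
# Hypothetical sample records; field names and values are assumptions for illustration.
movies = [
    {"title": "A", "gross": 760_505_847, "budget": 237_000_000},
    {"title": "B", "gross": 658_672_302, "budget": 200_000_000},
    {"title": "C", "gross": 309_404_152, "budget": 245_000_000},
]

# Profit = gross - budget, mirroring the spreadsheet calculation.
for m in movies:
    m["profit"] = m["gross"] - m["budget"]

# Sort descending by profit and keep the top N (top 10 in the project).
top = sorted(movies, key=lambda m: m["profit"], reverse=True)[:10]
print([m["title"] for m in top])
```

In a spreadsheet this corresponds to a helper profit column plus a descending sort; the scatter plot mentioned above would then make the outliers visible.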

(c) TOP 250 MOVIES - The analysis clearly indicated the TOP 250 all-language movies based on IMDB rankings. Foreign movies in languages other than English are listed separately.
(d) TOP 10 DIRECTORS - With the derived data, the top 10 directors on the basis of mean IMDB score were depicted. Functions like pivot table, filter, sorting and indexing were used.
(e) POPULAR GENRES - With the derived data, POPULAR GENRES on the basis of HIGHEST COUNT OF GENRES were depicted. Functions like pivot table, filter, sorting and indexing were used.
(f) CRITIC & USER FAVORITE - With the derived data, the use of ARRAY FUNCTIONS in Excel was practised.
(g) NUM USERS BY DECADE - With the derived data for Critic Fav & Num Fav Actors and Num Users by Decade, pivot-table advanced sorting and VLOOKUP functions were used for the desired results.
(h) INTERACTIVE DASHBOARD - An interactive dashboard was created in Excel to dynamically represent the IMDB data.

Result
The result of the project can be referred to in the following documents :-
1. Loom Video Link - https://www.loom.com/share/5d149cf7ba9443fc88dc8ff4b0c9d9831
2. Microsoft Excel Analysis file - https://docs.google.com/spreadsheets/d/1-iye0NfrH2FYPyhCIKXgoMLhnt5-2ym8/edit?usp=sharing&ouid=113906708419831349002&rtpof=true&sd=true





PROJECT DESCRIPTION
This project carries out the analysis to ascertain the risk involved for banks in offering loans to customers. Two types of risks are associated with the bank's loan customers:
1. If the applicant is likely to repay the loan, then not approving the loan results in a loss of business to the company.
2. If the applicant is not likely to repay the loan, i.e., he/she is likely to default, then approving the loan may lead to a financial loss for the company.
The dataset, with three worksheets, was provided as under :-
1. `application_data.csv` contains all the information about the client at the time of application. The data is about whether a client has payment difficulties.
2. `previous_application.csv` contains information about the client's previous loan data. It contains data on whether the previous application had been Approved, Cancelled, Refused or was an Unused offer.
3. `columns_description.csv` is a data dictionary which describes the meaning of the variables.

The column description datasheet provided information about the column contents. The application datasheet provided the following info :-
1. The client with payment difficulties: he/she had a late payment of more than X days on at least one of the first Y instalments of the loan in our sample.
2. All other cases: all other cases, when the payment is paid on time.
Based on these scenarios, a detailed analysis must be conducted to help the bank identify patterns which may be used for taking actions such as denying the loan, reducing the amount of the loan, lending (to risky applicants) at a higher interest rate, etc. This will ensure that consumers capable of repaying the loan are not rejected.

METHODOLOGY OF ANALYSIS
THE FILES `APPLICATION_DATA.CSV` AND `PREVIOUS_APPLICATION.CSV` HAD UNMATCHING ROWS AND COLUMNS; TO ASCERTAIN THE SAME I USED THE FOLLOWING PROCEDURE :-
1. COUNTA: I used the COUNTA function to count the total rows in each column.
2. PERCENTAGE OF NULL VALUES: I then found the percentage of null values in each column using the formula 1 - (Total Row Count for each column / Total Row Count).
3. REMOVAL OF NULL VALUES: I then removed all the columns having null-value percentages of more than 30%. For columns having less than 30% null values, I performed mean, median and mode imputations for the missing values.
4. OUTLIERS: I also found the outliers using the interquartile-range method, considering the relevant columns.
5. INSIGHT-RELEVANT COLUMNS: After going through each column description, I kept only the relevant columns to bring out the insights.
6. CONVERSION OF NUMBER OF DAYS TO YEARS: Columns holding day counts were converted into years by simply dividing the days by 365.
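The null-percentage, 30%-threshold and days-to-years steps reduce to simple arithmetic. The following Python sketch illustrates them on an invented three-row table; the column names are hypothetical stand-ins, not the actual dataset columns:

```python
# Toy rows standing in for application_data.csv; values and column names are invented.
rows = [
    {"AMT_INCOME": 202500, "OWN_CAR_AGE": None, "DAYS_BIRTH": -9461},
    {"AMT_INCOME": 270000, "OWN_CAR_AGE": None, "DAYS_BIRTH": -16765},
    {"AMT_INCOME": None,   "OWN_CAR_AGE": 26,   "DAYS_BIRTH": -19046},
]
total = len(rows)

# Step 2: percentage of null values per column = 1 - (non-null count / total rows).
null_pct = {
    col: 1 - sum(r[col] is not None for r in rows) / total
    for col in rows[0]
}

# Step 3: keep only columns with at most 30% nulls.
keep = [col for col, pct in null_pct.items() if pct <= 0.30]

# Step 6: convert day counts (stored as negative offsets here) to years via /365.
ages_in_years = [abs(r["DAYS_BIRTH"]) / 365 for r in rows]
print(null_pct, keep, [round(a) for a in ages_in_years])
```

For columns retained with under 30% nulls, the write-up imputes mean/median/mode instead of dropping, which is the usual trade-off between losing columns and biasing values.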

THE FOLLOWING ADDITIONAL INFORMATION WAS PROVIDED AS REQUIRED :-
1. DATA IMBALANCE
2. UNIVARIATE ANALYSIS
3. SEGMENTED UNIVARIATE ANALYSIS
4. BIVARIATE ANALYSIS
5. CORRELATIONS FOR APPLICANTS WITH PAYMENT MADE ON TIME

TECH STACKS USED
TECH-STACK - I USED MICROSOFT EXCEL 2019 TO CARRY OUT THE EDA FOR THE BANK LOAN CASE STUDY.
PURPOSE - MICROSOFT EXCEL IS USED TO CLEAN, ORGANISE AND CREATE GRAPHICAL REPRESENTATIONS OF THE RESULTS. EXCEL IS ALSO AN EFFECTIVE TOOL FOR UNDERSTANDING THE COMPARISONS CARRIED OUT FOR BETTER ANALYSIS. THE FOLLOWING ADVANCED TOOLS FROM EXCEL WERE USED:
✓ Univariate analysis
✓ Segmented univariate analysis
✓ Bivariate analysis
✓ Correlation

SUMMARY
THIS PROJECT HAS HELPED IN UNDERSTANDING THE FOLLOWING :-
1. USE OF ADVANCED EXCEL TOOLS.
2. HANDLING MULTIPLE WORKSHEETS WITH HUGE DATASETS.
3. SELECTION AND EXTRACTION OF ONLY USEFUL DATA.
4. FINDING CORRELATION BETWEEN THE CUSTOMER INFORMATION.
5. ENABLING BANKS TO TAKE DECISIONS ON DISBURSEMENT OF LOANS.

PROJECT: ANALYZING THE IMPACT OF CAR FEATURES ON PRICE AND PROFITABILITY
1. Project Description: The automotive industry has been rapidly evolving over the past few decades, with a growing focus on fuel efficiency, environmental sustainability, and technological innovation. With increasing competition among manufacturers and a changing consumer landscape, it has become more important than ever to understand the factors that drive consumer demand for cars. In recent years there has been a growing trend towards electric and hybrid vehicles, and increased interest in alternative fuel sources such as hydrogen and natural gas. At the same time, traditional gasoline-powered cars remain dominant in the market, with varying fuel types and grades available to consumers. For the given dataset, the client has asked, as a Data Analyst: how can a car manufacturer optimize pricing and product development decisions to maximize profitability while meeting consumer demand?
(a) Problem Statement and Aim. This problem could be approached by analyzing the relationship between a car's features, market category, and pricing, and identifying which features and categories are most popular among consumers and most profitable for the manufacturer. By using data analysis techniques such as regression analysis and market segmentation, the manufacturer could develop a pricing strategy that balances consumer demand with profitability, and identify which product features to focus on in future product development efforts. This could help the manufacturer improve its competitiveness in the market and increase its profitability over time.
(b) Description of the data sources used in the project. The dataset contains information on various car models and their specifications, and is titled "Car Data". It was collected and made available on Final Project-3: Analyzing the Impact of Car Features on Price and Profitability.
The brief overview of the dataset is as under:
(i) Number of observations: 11,159
(ii) Number of variables: 16
(iii) File type: CSV (Comma Separated Values)
This dataset could be useful for a variety of data analysis tasks, such as:
(i) Exploring trends in car features and pricing over time
(ii) Comparing the fuel efficiency of different types of cars
(iii) Investigating the relationship between a car's features and its popularity
(iv) Predicting the price of a car based on its features and market category

(e) Description of the data cleaning and pre-processing steps performed on the data. After studying the data, the following process and tools in Excel were used to clean the data :-
(i) Car model numbers. The data type was changed to Text via the 'Number Format' option under the 'Home' tab, as this column contains both numbers and text. Left unchanged, it would have led to errors in future analysis and pivot-table formation.
(ii) Wrong model numbers. A few model numbers that Excel interprets as dates, like 9-x3, 9-x4 and 9-x5, were shown as 09-Mar/Apr/May respectively. These were changed back to the respective model numbers with the help of the Find and Replace function.
(iii) Removing brackets and unneeded data. The data in column 3 (Engine Fuel Type) contained data in brackets which was not useful. It was removed with the Find and Replace function, using "(*" as the find term and a blank replacement; all data in brackets was removed.
(iv) Removing N/A data. The N/A data fields were removed to carry out better correlation and analysis.
(v) Removing blank rows. Blank rows were removed by pressing F5 > Go To > Special > Blanks > OK > Ctrl+Minus.
(f) Any assumptions made during the project. The following assumptions were made during the project :-
(i) Market Category. The market category data contains various sub-category features separated by commas. For better analysis, the first word in the category is taken as the base market category.
(ii) Driven_Wheels. This column contains the wheel-drive type of the cars. For ease of understanding, the feature is abbreviated as FWD, AWD, 4WD, RWD, etc.
2. Approach: the analytical methods adopted for carrying out each analysis task given in the project are discussed in the succeeding paras :-

TASK ANALYSIS
Insight Required: How does the popularity of a car model vary across different market categories?
• Task 1.A: Create a pivot table that shows the number of car models in each market category and their corresponding popularity scores.
• Task 1.B: Create a combo chart that visualizes the relationship between market category and popularity.
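Two of the cleaning/assumption steps above (stripping the bracketed fuel-type text, and taking the first comma-separated value as the base market category) can be sketched in Python; the sample strings are invented stand-ins for cells in the dataset:

```python
import re

# Step (iii): strip the bracketed portion of an "Engine Fuel Type" value.
# Excel's Find & Replace with "(*" achieves the same effect.
fuel = "premium unleaded (required)"          # hypothetical cell value
fuel_clean = re.sub(r"\s*\(.*\)", "", fuel)   # drop " (required)"

# Assumption (i): keep only the first comma-separated value as the base category.
category = "Luxury,Performance,Hybrid"        # hypothetical cell value
base_category = category.split(",")[0]

print(fuel_clean, base_category)
```

Collapsing the multi-valued market category to its first entry loses detail but makes pivot-table grouping unambiguous, which is the trade-off the assumption accepts.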

[Chart: count of car models and popularity scores by brand across market categories]
ANALYSIS OF TASK 1 A & B: The popularity of a car model across different market categories was analysed by creating a pivot table and a combo chart to visualise the relationship between market category and popularity scores. It was found that the brand Aston Martin has the maximum popularity score across all brands and market categories.

Insight Required: Relationship between a car's engine power and its price.
• Task 2: Create a scatter chart that plots engine power on the x-axis and price on the y-axis. Add a trendline to the chart to visualize the relationship between these variables.
[Chart: scatter plot of MSRP vs Engine HP with trendline]
ANALYSIS OF TASK 2: The chart reflects the relationship between engine horsepower and the price of a car: the price rises exponentially with the rise in Engine HP. The same is depicted by a trendline.

Insight Required: Most important car features in determining a car's price.
• Task 3: Use regression analysis to identify the variables that have the strongest relationship with a car's price. Then create a bar chart that shows the coefficient values for each variable to visualize their relative importance.
ANALYSIS OF TASK 3: Regression analysis to identify the variables with the strongest relationship to a car's price was carried out; it found that the number of engine cylinders has the strongest relationship in determining the car's price. The same is depicted by a bar chart.

Insight Required: How the average price of a car varies across different manufacturers.
• Task 4.A: Create a pivot table that shows the average price of cars for each manufacturer.
• Task 4.B: Create a bar chart or a horizontal stacked bar chart that visualizes the relationship between manufacturer and average price.
ANALYSIS OF TASK 4 A & B: An analysis of the average price of a car across different manufacturers was carried out through a pivot table. The derived data was plotted on a horizontal stacked bar chart visualizing the relationship between manufacturer and average price. It was found that the brand BUGATTI has the highest average MSRP among the manufacturers.
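The regression coefficients behind an analysis like Task 3 come from a least-squares fit. A minimal one-variable sketch (cylinders vs. price, on invented data points, so the numbers do not match the project's output) shows where a coefficient comes from:

```python
# One-variable least-squares fit; data points are hypothetical, for illustration only.
cylinders = [4, 4, 6, 6, 8, 8, 12]
price = [25_000, 27_000, 40_000, 43_000, 70_000, 75_000, 150_000]

n = len(cylinders)
mean_x = sum(cylinders) / n
mean_y = sum(price) / n

# slope (the regression coefficient) = cov(x, y) / var(x)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(cylinders, price))
var = sum((x - mean_x) ** 2 for x in cylinders)
slope = cov / var
intercept = mean_y - slope * mean_x
print(slope, intercept)
```

Excel's Data Analysis ToolPak regression reports one such coefficient per input variable; the bar chart in Task 3 simply plots those coefficients side by side.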

Insight Required: Relationship between fuel efficiency and the number of cylinders in a car's engine.
• Task 5.A: Create a scatter plot with the number of cylinders on the x-axis and highway MPG on the y-axis. Then create a trendline on the scatter plot to visually estimate the slope of the relationship and assess its significance.
[Chart: scatter plot of MPG vs number of cylinders with trendline]
• Task 5.B: Calculate the correlation coefficient between the number of cylinders and highway MPG to quantify the strength and direction of the relationship.

CORRELATION BY DATA ANALYSIS TOOL
                   Engine Cylinders   city mpg   REMARKS
Engine Cylinders   1
city mpg           -0.749330492       1          NEGATIVE
BY FORMULA         -0.749330492                  NEGATIVE

ANALYSIS OF TASK 5A & 5B: An analysis of the relationship between fuel efficiency and the number of cylinders in a car's engine found the following :-
(a) Scatter plot - The trendline on the scatter plot, used to visually estimate the slope of the relationship, shows a declining (negative) relationship between the number of cylinders and MPG: the more cylinders, the higher the fuel consumption.
(b) Correlation coefficient - Using the Data Analysis tool, the correlation between the number of cylinders and MPG was computed. It turned out to be negative, confirming the same result.
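The correlation value reported by Excel's CORREL / Data Analysis tool can be reproduced with the textbook Pearson formula. The cylinder/MPG pairs below are invented, so the result illustrates the computation rather than matching the report's -0.749:

```python
from math import sqrt

# Pearson correlation coefficient by hand; sample pairs are hypothetical.
cylinders = [4, 4, 6, 6, 8, 8, 12]
mpg = [36, 34, 28, 26, 20, 19, 14]

n = len(cylinders)
mean_x = sum(cylinders) / n
mean_y = sum(mpg) / n

cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(cylinders, mpg))
sx = sqrt(sum((x - mean_x) ** 2 for x in cylinders))
sy = sqrt(sum((y - mean_y) ** 2 for y in mpg))

r = cov / (sx * sy)  # negative: more cylinders, fewer miles per gallon
print(r)
```

A value near -1 indicates a strong inverse relationship, which is what the project's -0.749 between engine cylinders and city MPG expresses.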

Building the Dashboard
Task 1: How does the distribution of car prices vary by brand and body style?
• Hints: Stacked column chart to show the distribution of car prices by brand and body style. Use filters and slicers to make the chart interactive. Calculate the total MSRP for each brand and body style using SUMIF or Pivot Tables.
Task 2: Which car brands have the highest and lowest average MSRPs, and how does this vary by body style?
• Hints: Clustered column chart to compare the average MSRPs across different car brands and body styles. Calculate the average MSRP for each brand and body style using AVERAGEIF or Pivot Tables.

Task 3: How do different features, such as transmission type, affect the MSRP, and how does this vary by body style?
• Hints: Scatter plot chart to visualize the relationship between MSRP and transmission type, with different symbols for each body style. Calculate the average MSRP for each combination of transmission type and body style using AVERAGEIFS or Pivot Tables.
Task 4: How does the fuel efficiency of cars vary across different body styles and model years?
• Hints: Line chart to show the trend of fuel efficiency (MPG) over time for each body style. Calculate the average MPG for each combination of body style and model year using AVERAGEIFS or Pivot Tables.

Task 5: How do the car's horsepower, MPG, and price vary across different brands?
• Hints: Bubble chart to visualize the relationship between horsepower, MPG, and price across different car brands. Assign different colors to each brand and label the bubbles with the car model name. Calculate the average horsepower, MPG, and MSRP for each car brand using AVERAGEIFS or Pivot Tables.

Tech-Stack Used
MICROSOFT EXCEL 2019 was used to carry out the analysis of the project. The following tools were extensively used during the process :-
❖ Data cleaning tools
❖ Regression analysis
❖ Correlation coefficient
❖ Pivot tables
❖ OFFSET function for setting up images in the slicers
❖ Use of slicers and tools
❖ Pivot charts
❖ Interactive dashboards
❖ Link functions on the dashboard

Insights
Key insights or findings discovered during the project are given as under :-
❖ Car popularity and MSRP: The popularity of a car model across different market categories was analysed by creating a pivot table and a combo chart to visualise the relationship between market category and popularity scores. It was found that the brand Aston Martin has the maximum popularity score across all brands and market categories.
❖ Car HP and price: The relationship between engine horsepower and the price of a car shows that the price rises exponentially with the rise in Engine HP. The same is depicted by a trendline.
❖ Regression analysis of number of cylinders and car price: Regression analysis to identify the variables with the strongest relationship to a car's price found that the number of engine cylinders has the strongest relationship in determining the car's price. The same is depicted by a bar chart.
❖ Average MSRP of various brands: An analysis of the average price of a car across different manufacturers was carried out through a pivot table. The derived data was plotted on a horizontal stacked bar chart visualizing the relationship between manufacturer and average price. It was found that the brand BUGATTI has the highest average MSRP among the manufacturers.
❖ Fuel efficiency analysis: An analysis of the relationship between fuel efficiency and the number of cylinders in a car's engine, using a scatter plot and the correlation coefficient, found the relationship to be negative: the more cylinders, the higher the fuel consumption.

Result
The result of the project can be referred to in the following documents :-
1. Car_data analysis by Advitya.xlsx - https://docs.google.com/spreadsheets/d/1_nXhNfX-zSmFVuA0ZBikJWcoJlWRUuz6/edit?usp=share_link&ouid=113906708419831349002&rtpof=true&sd=true
2. DASHBOARD CAR DATA ANALYSIS BY ADVITYA.xlsx - https://docs.google.com/spreadsheets/d/1wAKpwxSQ58vXSqTr2VT4Q_evu5dQ2JqM/edit?usp=share_link&ouid=113906708419831349002&rtpof=true&sd=true

1. Project Description: The dataset of a Customer Experience (CX) inbound calling team for 23 days was provided. Inbound customer support is defined as the call centre which is responsible for handling inbound calls of customers. Inbound calls are the incoming voice calls of the existing customers or prospective customers for your business, which are attended by customer care representatives.
(a) Problem Statement and Aim. This problem could be approached by analyzing the call centre data and generating insights wrt the following aspects :-
(i) Call volume analysis per day at different time intervals.
(ii) Number of calls attended by an agent / IVR / abandoned.
(iii) Manpower assessment for heavy call-traffic periods and to reduce the abandon rate.
(iv) Correlation analysis of various time shifts and the number of agents required.
By using data analysis techniques, the company could develop a strategy that enhances the call centre's capability to attend the maximum number of calls from customers and improve the customer support architecture.
(b) Description of the data sources used in the project. The dataset contains information on the inbound calling team of company ABC, and is titled "Call_volume_trend_analysis". Data includes Agent_Name, Agent_ID, Queue_Time [duration for which customers have to wait before they get connected to an agent], Time [time at which the call was made by the customer in a day], Time_Bucket [for easiness we have also provided you with the time bucket], Duration [duration for which a customer and executive are on call], Call_Seconds [for simplicity we have also converted those times into seconds], and Call_Status (Abandoned, Answered, Transferred). It was collected and made available on Final Project-4: ABC CALL VOLUME TREND ANALYSIS. The brief overview of the dataset is as under:
(i) Number of observations: 14,85,980
(ii) Number of variables: 13
(iii) File type: CSV (Comma Separated Values)
This dataset could be useful for a variety of data analysis tasks, such as:
(i) Call volume analysis per day at different time intervals.

(ii) Number of calls attended by an agent / IVR / abandoned.
(iii) Manpower assessment for heavy call-traffic periods and to reduce the abandon rate.
(iv) Correlation analysis of various time shifts and the number of agents required.
(e) Description of the data cleaning and pre-processing steps performed on the data. After studying the data, the following process and tools in Excel were used to clean the data :-
(i) Agent Name. #NA values were found in this column. These were removed; left in place, they would lead to errors in future analysis and pivot-table formation.
(ii) Agent IDs. The Agent ID column also contained similar #NA values, which were similarly removed.
(iii) Duration. The data in column 8 (Duration) contained both date and time formats. It was changed to time through the Number Format option for ease of duration calculation.
(iv) Wrapped by. This column had blank cells corresponding to abandoned calls, as they were attended neither by an agent nor auto-wrapped. Removing these blanks would have led to data instability, hence a custom "system" name was given to wrap up the abandoned calls for better correlation and analysis.
(v) Removing blank rows. Blank rows were removed by pressing F5 > Go To > Special > Blanks > OK > Ctrl+Minus.
(f) Any assumptions made during the project. The following assumptions were made during the project :-
(i) Wrapped by. As in (e)(iv) above, blank "Wrapped by" cells for abandoned calls were assigned a custom "system" name for better correlation and analysis.
(ii) Assumptions provided in the project. An agent works 6 days a week; on average, total unplanned leave per agent is 4 days a month; an agent's total working hours are 9 hrs, out of which 1.5 hrs go to lunch and snacks in the office. On average, an agent is occupied for 60% of their actual working hours (i.e. 60% of 7.5 hrs) on calls with customers/users. Total days in a month: 30.
2. Approach: the analytical methods adopted for carrying out each analysis task given in the project are discussed in the succeeding paras :-
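The staffing assumptions in (f)(ii) reduce to a few arithmetic facts that the later manpower tasks rely on. This sketch only makes the stated assumptions explicit; no new figures are introduced:

```python
# All numbers come from the stated assumptions in the project write-up.
total_shift_hrs = 9.0
breaks_hrs = 1.5
actual_working_hrs = total_shift_hrs - breaks_hrs      # 7.5 hrs at work

occupancy = 0.60                                       # 60% of working hours on calls
on_call_hrs_per_day = actual_working_hrs * occupancy   # hrs on calls per agent per day

days_in_month = 30
weekly_offs = days_in_month // 7                       # 6-day week -> ~4 off days
unplanned_leave = 4
working_days_per_month = days_in_month - weekly_offs - unplanned_leave

print(on_call_hrs_per_day, working_days_per_month)
```

So each agent effectively contributes about 4.5 on-call hours per working day, which is the per-agent capacity the later agent-count calculations divide by.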

TASK ANALYSIS
Insight Required: The task requires analysing the average call time for incoming calls received by the agents during the respective time buckets.
Task A: Calculate the average call time duration for all incoming calls received by agents (in each Time_Bucket).
[Side-by-side screenshots: Microsoft Excel | Microsoft Power BI]
ANALYSIS OF TASK A: The average call time for all incoming calls answered by agents was worked out as under :-
Microsoft Excel - The analysis was carried out by creating a pivot table; time bucket and average time were derived from the Duration column. Call status "Answered" and "Wrapped by agent" were shown in the filter field for better visualisation. The same was plotted on a column bar chart.
Microsoft Power BI - The analysis was carried out by creating visualisations; time bucket and average time were derived from the Call_Seconds column. Call status "Answered" and "Wrapped by agent" were shown in the filter field for better visualisation. The same was plotted on a column bar chart.
Insight - The analysis clearly indicated that the time bucket 09:00-11:00 has the highest average call duration. This shows that during this time the customers are very active as per the schedule of the day.

Insight Required: The task requires showing the total volume/number of calls coming in via charts/graphs [Number of calls v/s Time]. You can select time in bucket form (i.e. 1-2, 2-3, …..).
Task B: Show the total volume/number of calls coming in via charts/graphs, i.e. Number of calls v/s Time (in each Time_Bucket).
[Side-by-side screenshots: Microsoft Excel | Microsoft Power BI]
ANALYSIS OF TASK B: The total volume/number of calls coming in each Time_Bucket was worked out as under :-
Microsoft Excel - The analysis was carried out by creating a pivot table. Time bucket and number of calls were derived from the customer phone number column; the call volume field setting was changed to Count to show the total number of calls. For better visualisation, the same was plotted on a combo chart with a trendline showing the percentage of volume at each time bucket. The same was shown as a pie chart too.
Microsoft Power BI - Similar analysis was carried out in Power BI with the same data field values and visualisations.
Insight - The analysis clearly indicated that the time bucket 12:00-13:00 has the highest call volume. Call volume during 11:00-12:00 and 13:00-14:00 showed similar traffic. This shows that during this time the customers are very active as per the schedule of the day.

Insight Required: Propose a manpower plan required during each time bucket [between 9 am and 9 pm] to reduce the abandon rate to 10%.
Task C: Calculate the minimum number of agents required in each time bucket so that at least 90 calls out of 100 are answered.
[Side-by-side screenshots: Microsoft Excel | Microsoft Power BI]
ANALYSIS OF TASK C: The minimum number of agents required in each time bucket, so that at least 90 out of 100 calls are answered, was worked out as under :-
Microsoft Excel - The analysis was carried out by performing the following steps :-
1. Work out the number of answered, transferred and abandoned calls, along with the percentage of each wrt the total number of calls, in a pivot table. Shown as a clustered bar chart.
2. Create another pivot table with time bucket and number of calls attended shown as a percentage. This percentage will be used for distribution of agents at each time bucket, considering the volume of calls and the corresponding agents to attend the same.
3. Calculate the manpower calculations as per the assumptions provided. A detailed explanation of the manpower calculation is shown on the worksheet.

4. Using the unitary method, calculate: if 40 agents are needed to attend 70% of calls, then for attending 90% of calls 52 agents will be needed.
5. Distribute the 52 agents as per the call percentage against each time bucket.
Microsoft Power BI - Similar analysis was carried out in Power BI with the same data field values and visualisations.
Insight - With the derived data and percentages, the mathematical calculations can be worked out to calculate the manpower required for optimising the desired output.

Insight Required: Propose a manpower plan required during each time bucket in a day. The maximum abandon-rate assumption stays the same, 10%.
Task D: Calculate the additional number of agents required in each time bucket for undertaking night calls with an abandon rate of 10%.

ANALYSIS OF TASK D: The additional number of agents required in each time bucket for undertaking night calls with an abandon rate of 10% was worked out as under :-
Microsoft Excel - The analysis was carried out by performing the following steps :-
1. Work out the number of answered, transferred and abandoned calls corresponding to the time buckets.
2. Work out the percentage of abandoned calls in each time bucket in a separate column; name it "30% abandoned".
3. Create another column and reduce the abandoned-call total to 10% of total calls; work out the distribution in each time bucket as per the percentage in the "30% abandoned" column.
4. Now derive a new grand total of calls after subtracting the 10% abandon value and the 1% transferred-call values.
5. With the new grand total, work out the average calls for the day. Reduce this to 30% for the night.
6. Create another pivot table with time bucket and number of calls attended shown as a percentage. This percentage will be used for distribution of agents at each time bucket, considering the volume of calls and the corresponding agents to attend the same.
7. Calculate the manpower calculations as per the assumptions provided. A detailed explanation of the manpower calculation is shown on the worksheet.
8. Using the unitary method, calculate: if 40 agents are needed to attend 70% of calls, then for attending 90% of calls 52 agents will be needed.
9. Distribute the 52 agents as per the call percentage against each time bucket.
Microsoft Power BI - Similar analysis was carried out in Power BI with the same data field values and visualisations.
Insight - With the derived data and percentages, the mathematical calculations can be worked out to calculate the manpower required for undertaking the night calls.
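The unitary-method scaling used in Tasks C and D is plain proportion arithmetic; here it is as a one-line Python sketch, using only the figures stated in the write-up (40 agents, 70% current answer rate, 90% target):

```python
import math

# If 40 agents can attend 70% of calls, scale linearly to reach a 90% answer
# rate (10% abandon) and round up, since agents come in whole numbers.
agents_for_70_pct = 40
current_rate = 0.70
target_rate = 0.90

agents_needed = math.ceil(agents_for_70_pct * target_rate / current_rate)
print(agents_needed)  # 40 * 90/70 = 51.4..., rounded up to 52
```

The linear scaling assumes call-handling capacity grows proportionally with headcount, which is the simplification the unitary method makes; queueing models such as Erlang C would give a more conservative figure.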

Building the Dashboard
An interactive dashboard was created both in Excel and Power BI.
[Side-by-side screenshots: Microsoft Excel dashboard | Microsoft Power BI dashboard]

Tech-Stack Used
MICROSOFT EXCEL 2019 and POWER BI were used to carry out the analysis of the project. The following tools were extensively used during the process :-
• Data cleaning tools
• Pivot tables
• OFFSET function
• Use of slicers and tools
• Pivot charts and visualisations in Power BI
• Interactive dashboards
• Link functions on the dashboard

Insights
Key insights or findings discovered during the project are given as under :-
❖ Average Calls vs Time - The analysis clearly indicated that the time bucket 09:00-11:00 has the highest average call duration. This shows that during this time the customers are very active as per the schedule of the day.
❖ Call Volume vs Time - The analysis clearly indicated that the time bucket 12:00-13:00 has the highest call volume. Call volume during 11:00-12:00 and 13:00-14:00 showed similar traffic. This shows that during this time the customers are very active as per the schedule of the day.
❖ Manpower requirement for day shift - With the derived data and percentages, the mathematical calculations can be worked out to calculate the manpower required for optimising the desired output.
❖ Manpower requirement for night shift - With the derived data and percentages, the mathematical calculations can be worked out to calculate the manpower required for undertaking the night calls.

