Results and Analysis
After conducting user surveys and usability testing, we analyzed both quantitative and qualitative data. For surveys, Google Forms' built-in summaries helped us surface key insights. For usability testing, we combined statistical analysis with qualitative inferences drawn from users' think-aloud comments. This combined approach helped us identify potential UX issues and suggest improvements for the future.
We used the response summary provided by Google Forms to draw inferences about the general job-search journey of the survey participants. We tried to identify their preferences and usage patterns as well as user workarounds and recommendations.
53.2% of respondents said they always or often customize their CV/resume to align with the specific requirements of the job position
75% said they search for job opportunities by job title
11 of 32 participants ranked salary and benefits as the most important factor when considering a job opportunity
56.3% of respondents said they frequently or occasionally update or adjust their search filters on job search platforms to refine their search
Total number of participants - 5
Total number of tasks - 12
Number of expert users - 1
We collected the participants' performance data (i.e., clicks/steps taken to complete the tasks and time taken to complete the tasks) using the Google Sheets below. We also recorded the performance of an expert user (one of our team members) on both metrics. We then calculated the participant averages for both metrics, as well as the deviation and standard deviation, to highlight and prioritize key areas of UX issues.
A. Click test - number of clicks participants took to complete the task
We first collected data on the number of clicks users took to complete individual tasks.
Standard deviation: Next we calculated the standard deviation of the clicks/steps of participants P1-P5 for each task using a Google Sheets formula (e.g., =STDEVP(C2:G2))
Higher standard deviation indicates more inconsistency among participants in completing the task.
Tasks with a higher standard deviation may have a less predictable or more varied user experience.
Based on this, we ranked the tasks and highlighted the six tasks with the highest standard deviation values.
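The ranking step above can be sketched in Python using the standard library's `statistics.pstdev`, which matches Sheets' STDEVP (population standard deviation). The click counts below are hypothetical, illustrative values, not the study's actual data:

```python
import statistics

# Hypothetical click counts per task for participants P1-P5
# (illustrative values only, not the study's recorded data).
clicks = {
    "T1": [4, 6, 5, 9, 4],
    "T2": [3, 3, 4, 3, 3],
    "T3": [7, 12, 6, 15, 8],
}

# Population standard deviation per task, equivalent to =STDEVP(C2:G2).
stdevs = {task: statistics.pstdev(vals) for task, vals in clicks.items()}

# Rank tasks from most to least inconsistent; the top entries are the
# candidates for usability issues.
ranked = sorted(stdevs, key=stdevs.get, reverse=True)
print(ranked)
```

In a real analysis the dictionary would hold all 12 tasks and the top six entries of `ranked` would be highlighted.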
B. Time taken by participants to complete each task vs expert user's time
We collected data on the time taken by each of the 5 participants to complete the individual tasks.
Expert time - We measured the time taken by the expert user to complete the same tasks
Deviation value - We calculated the average time taken by all 5 participants for each task, and then the absolute deviation between the average time and the expert time for that task using the formula: =ABS(H20-I20)
This approach focused on tasks where the average time taken by participants deviates significantly from the expert's time.
Tasks with higher deviations suggest that participants, on average, are diverging more from the expected behavior, indicating potential usability issues.
Based on this, we ranked the tasks and highlighted the six tasks with the highest deviation between expert time and average participant time.
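The time-deviation calculation above can be sketched the same way. The timings are hypothetical placeholders, not the study's measurements:

```python
# Hypothetical completion times in seconds for participants P1-P5
# and the expert user (illustrative values only).
times = {
    "T1": {"participants": [42, 55, 48, 60, 50], "expert": 30},
    "T2": {"participants": [20, 22, 21, 19, 23], "expert": 18},
}

# Absolute deviation between the participant average and the expert
# time, equivalent to the Sheets formula =ABS(H20-I20).
deviations = {}
for task, data in times.items():
    avg = sum(data["participants"]) / len(data["participants"])
    deviations[task] = abs(avg - data["expert"])

# Rank tasks by how far participants diverge from the expert baseline.
ranked = sorted(deviations, key=deviations.get, reverse=True)
print(ranked)
```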
Finally, we compared the tasks with the highest standard deviation in clicks against those with the highest deviation in time to identify the top 5 overlapping issues.
We further refined this list using the inferences from the qualitative feedback gathered during usability testing, where users were encouraged to think aloud, as well as suggestions from the user survey.
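The overlap step can be expressed as a simple intersection of the two ranked lists. The task IDs below are hypothetical stand-ins for the study's actual top-6 lists:

```python
# Hypothetical top-6 task lists from each metric (illustrative only).
top_by_click_stdev = ["T3", "T7", "T1", "T9", "T5", "T11"]
top_by_time_deviation = ["T7", "T3", "T5", "T2", "T9", "T12"]

# Tasks flagged by both metrics become the candidate UX issues;
# order here follows the click-based ranking.
overlap = [t for t in top_by_click_stdev if t in top_by_time_deviation]
print(overlap)
```

Tasks that surface in both lists are the strongest signals of a usability problem, since participants were both inconsistent in how they navigated and slower than the expert baseline.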