The Recruitment Automation Blog


      5 min read
      Roselle Lim
      Conversational Designer

      4 Lessons Learned from our Candidate Satisfaction Study


      Talkpush shares the results of its latest Conversational Design Study and recommends a better way to talk with candidates using chatbots.


       

      Since 2016, Talkpush has been designing and developing chatbots for recruitment. As we do every year, our Conversation Design team embarked on a candidate experience study to validate its hypotheses about how bots should be designed and to learn more about the needs and wants of candidates across the globe.

       

      Talkpush offers three tiers of bot plans to clients. Stanley is our basic bot package, designed to efficiently capture candidate pre-screening data. Personas are bots with classic recruiter personalities, designed for a more personal experience. Lastly, our Custom bots are fully customized to the customer’s brand. Each of these plans is adapted to meet the language requirements of our clients’ regions. We decided to compare the three bot types in terms of candidate experience because good design decisions rely on good research and data. We wanted to know what was working for candidates, and what could be improved so that candidates achieve their goals when interacting with our bots.

       

      Methodology

      From May to August 2019, we asked candidates to rate their experience as soon as they completed their job application. If the rating was positive, we asked them to complete an NPS survey. If the rating was average or poor, we asked them for feedback on how to improve, on top of the NPS survey.

      In September, we extracted all of this applicant feedback for analysis, capturing what candidates said in their own words and using their feedback to tell us what made an experience merely average or poor, and what we could do to change it. We also factored in that ratings lean towards the positive, as most candidates tend to rate the experience as satisfactory, thinking that it will influence the hiring decision.

       

      We analyzed 4,960 responses. Respondents came from six countries, with Asia Pacific and Latin America representing 90% of the volume. It was a mountain of data which, ironically, required a human to analyze because of language distinctions and subtleties. We first divided the feedback by customer, then categorized the customers by bot type subscription. Next, we evaluated the usefulness of each piece of feedback and classified it as valid or invalid. We grouped the valid feedback into positive, negative or neutral suggestions, and for each piece of negative feedback we assigned a theme describing the problem.
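
      To make that triage concrete, here is a minimal sketch of how the categorization could be expressed in code. The record fields, category names and the sample customer are illustrative assumptions, not the actual Talkpush data model.

```python
from dataclasses import dataclass

# Illustrative feedback record; the real export fields are not shown in the article.
@dataclass
class Feedback:
    customer: str        # which client the candidate applied to
    bot_type: str        # "Stanley", "Persona" or "Custom"
    text: str            # the candidate's comment, in their own words
    valid: bool          # False for "thanks", "bye", emojis, or off-topic comments
    sentiment: str = ""  # "positive", "negative" or "neutral" (valid feedback only)
    theme: str = ""      # problem theme assigned to negative feedback

def summarize(records: list[Feedback]) -> dict:
    """Tally valid vs. invalid feedback, and sentiment, per bot type."""
    summary: dict = {}
    for r in records:
        bucket = summary.setdefault(r.bot_type, {"valid": 0, "invalid": 0,
                                                  "positive": 0, "negative": 0, "neutral": 0})
        if not r.valid:
            bucket["invalid"] += 1
        else:
            bucket["valid"] += 1
            bucket[r.sentiment] += 1
    return summary

# Example: one valid negative comment tagged with a theme, one invalid "thanks".
records = [
    Feedback("AcmeBPO", "Custom", "The buttons were confusing", True, "negative", "button placement"),
    Feedback("AcmeBPO", "Stanley", "thanks", False),
]
print(summarize(records))
```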

       

      But enough with the HOW… let’s tell you WHAT we found out!

       

      Lesson #1: Placement of Buttons for Multiple Choice Questions is KEY

      What the huh? Depending on the bot plan, 20 to 39% of respondents did not answer the question properly.

       

      Our research showed that more than 52% of all the feedback received was invalid! We scratched our heads and wondered why. Candidates were either leaving comments like “Thanks” and “Bye” (or even an emoji) when asked for feedback, or leaving comments that were not about the chatbot experience but about the hiring company itself. There had to be something wrong with the way our bots were asking.

       

      It turns out there was something wrong! While worthless comments (like “thanks” or “bye”) did make up the majority of invalid feedback, the big surprise was that a lot of it was also caused by user error. Candidates said they didn’t mean to rate the experience poorly, and actually had no feedback to give. Some users said they just clicked the button by accident and didn’t mean to give a rating at all.

      We were using buttons to ask candidates to rate their experience as Great, Smooth, Okay, or Poor, in that left-to-right order. Users were clicking on Okay or Poor, which took them to the feedback section. And alas! The truth came out: the button placement was causing the human error. Users may be used to tapping the rightmost button on mobile, because that is usually where the primary action sits, or they may simply have tapped a button out of instinct.

      We have since changed the experience rating survey from buttons to cards to avoid any more user errors, and the results are outstanding: we received zero user errors in the September and October applicant feedback data. It was a lesson well learned about button placement, and we have applied it to many other use cases.
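
      To illustrate the difference, here is a minimal sketch of the two prompt styles as Messenger Send API payloads. This assumes a Messenger bot and uses invented payload names; the article does not show Talkpush’s actual message structures, and our bots also run on WhatsApp and SMS.

```python
# Before: one question with four quick-reply buttons in a row. "Poor" ends up in the
# rightmost slot, where a primary action usually lives, inviting accidental taps.
rating_with_buttons = {
    "text": "How was your application experience?",
    "quick_replies": [
        {"content_type": "text", "title": "Great",  "payload": "RATE_GREAT"},
        {"content_type": "text", "title": "Smooth", "payload": "RATE_SMOOTH"},
        {"content_type": "text", "title": "Okay",   "payload": "RATE_OKAY"},
        {"content_type": "text", "title": "Poor",   "payload": "RATE_POOR"},
    ],
}

# After: each rating is its own card in a carousel, so the candidate has to scroll and
# deliberately choose one rather than tap whatever happens to be closest.
rating_with_cards = {
    "attachment": {
        "type": "template",
        "payload": {
            "template_type": "generic",
            "elements": [
                {
                    "title": rating,
                    "subtitle": "Tap to choose this rating",
                    "buttons": [{"type": "postback", "title": "Choose",
                                 "payload": f"RATE_{rating.upper()}"}],
                }
                for rating in ("Great", "Smooth", "Okay", "Poor")
            ],
        },
    },
}
```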

       

      Lesson #2: Timing of NPS Question is Critical

      We designed the chatbots to automatically send an NPS survey at the end of the pre-screening process to give our clients easily captured vanity data. 8% of candidates participated in that survey. Out of that 8%, 47% were very likely to recommend the company to their friends, while 38% were not likely to recommend the company at all.
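
      For readers new to NPS: the score is the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 to 6). The article does not give the exact score, but if the 47% “very likely” group maps to promoters and the 38% “not likely” group to detractors, the implied score would be roughly 47 - 38 = +9. A minimal sketch of the calculation:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Toy sample roughly matching the proportions above: 47 promoters, 38 detractors,
# 15 passives out of 100 respondents -> an NPS of +9.
sample = [10] * 47 + [3] * 38 + [7] * 15
print(nps(sample))  # 9.0
```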

       

      So, who was getting the high scores? Companies with more established brands scored better than their lesser-known counterparts. Additionally, certain industries scored lower, such as the call center industry. Candidates said they were not willing to refer their friends if the company was unknown to them, and that their friends were not likely to be interested in working at a call center. Interestingly, a chunk of candidates said they would only refer their friends if they themselves got hired.

       

      We have since removed the automated NPS rating at the end of the pre-screening process, both because of the low participation and because the results were biased: the likelihood of candidates referring a friend was directly related to their own chances of being hired. We also changed our “Refer a Friend” design. Previously, we put this button among the main actions in our greeting. Now that we know candidates are unlikely to refer a company they have no information about, or one that has not yet hired them, we offer the Refer a Friend option only after a candidate completes their application or is shortlisted.

       

      Lesson #3: Recruitment Satisfaction Extends to the Brand

      We found that 60% of candidates left positive feedback, while negative and neutral feedback were almost tied at 21% and 20% respectively. The positive feedback showed that candidates are generally “amazed” by the automated, chatbot-enabled recruitment process, especially people who have never experienced it before. They described the process as “convenient,” “great service” and “systematic.” They loved the technology and found messaging a perfect fit for recruitment. Candidates said they were impressed by the speed and efficiency, and commended the hiring company for being progressive.

       

      Another compelling finding was that candidates also complimented the hiring company for being “considerate” and generous. They saw the automated recruitment process as a big-hearted move to reach out to all kinds of potential talent. Candidates who live out of town said they were grateful for the opportunity to be heard without having to travel to the recruitment site. Mothers with young children said they were glad to be able to apply for a job and make a positive impression without leaving their house. Candidates said they hope more companies will provide access to jobs this easily and efficiently.

      Of our three bot plans, Custom bots had the highest volume of applicant feedback, as well as the most positive feedback. It turns out that candidates responded well to personalized conversation and found custom bots more engaging. This is because our custom bots used more emojis and gifs, sometimes spoke in slang, and even had brand-related names (read Why your Recruitment Chatbot Needs a Personality for more). Candidates wanted to interact more with a custom chatbot, entertained by what it could do, sometimes even to the detriment of completing the job application! Candidates who were satisfied with the chatbot interaction extended that satisfaction to the brand and left a positive NPS rating.

       

      Lesson #4: Employers Need to Adapt their Screening Techniques to the Automated Process

      Our customers use the Talkpush CRM to create job openings that candidates can choose from over SMS, Messenger or WhatsApp. Recruiters can set up screening questions for each job opening inside the Talkpush CRM. It turns out that candidates love answering them and want the chance to say more about their relevant experience. In fact, the number one neutral suggestion from candidates across all countries and all bot types is to improve the screening questions. Candidates ask for questions more relevant to the job, more personal questions, better multiple choice options, and the ability to freely type their answers. In other words: make it easy for them to tell you more!
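
      As an illustration of what better multiple choice options and freely typed answers could look like, here is a hypothetical screening-question setup. The field names are invented for the example and are not the Talkpush CRM schema.

```python
# Hypothetical screening-question definitions (illustrative only, not the Talkpush CRM format).
screening_questions = [
    {
        # Job-relevant multiple choice, with an escape hatch for answers the options missed.
        "prompt": "How many years of customer service experience do you have?",
        "type": "multiple_choice",
        "options": ["None yet", "Less than 1 year", "1-3 years",
                    "More than 3 years", "Other (type your answer)"],
    },
    {
        # An open question gives candidates room to stand out in their own words.
        "prompt": "Tell us about a time you turned an unhappy customer around.",
        "type": "free_text",
    },
]

for question in screening_questions:
    print(f"{question['type']}: {question['prompt']}")
```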

       

      During a phone interview, recruiters’ questions can flow freely based on the candidate’s answers. Recruiters can clarify or reword a vague question, and candidates are free to express themselves or elaborate on their answers. In an automated bot interview, candidates are given a set of questions, often with multiple choice answers, and are forced to choose from the available options even when none reflects their real answer. They also find that some questions are unrelated to, or even contradict, what they have just said.

       

      So, here’s our take. Automated applications can be a joy if recruiters study the candidate responses and design their questions to be challenging but enjoyable enough to be completed. Recruiters need to give candidates a chance to stand out through their answers because it’s the main way they can differentiate themselves and feel hopeful about the outcome. The pre-screening chatbot questions can make or break the automated pre-screening experience: write them with empathy.

       

      From Research to Action

      The feedback data was a gold mine of conversational design insights. After the study, we wasted no time in implementing our learnings. Our efforts paid off recently when we gathered applicant feedback data again for September and October. In those two months, the volume of feedback quadrupled compared with the earlier study (from 4,960 to 19,872 responses). Along with the increase in quantity, the quality also improved, with a higher share of valid feedback. And more importantly, positive ratings went from 60% to 71%.

       

      Designers should always use data to inform design. A few weeks’ worth of research and analysis had a long-term impact on our customers’ goals and our own design team’s personal goals. Customers are always asking us to roll out new features, but we should really be listening to candidates to drive most of that future design work. After all, they are the ones we need to attract.

       

      What do you think? If you found this study interesting, we recommend reading this article with tips for engaging your audience with a chatbot on a widely preferred channel: WhatsApp for Business.