Considering Form Types: AI-only, both AI and manual, or hybrid/blended
When starting to incorporate AI/automated scoring into your quality assurance program and planning to build forms (scoring templates), some questions that may come to mind are:
- Should I build a separate form to be used by AI/for automated scoring only?
- Should I have a separate form for manual scoring in addition to the AI form?
- Should I build a hybrid form that is a blend of both AI/automated scoring + manual?
There is no right or wrong answer. It depends on what you are looking to achieve with AI/automation, where you are in your quality program maturity, staffing/resource availability, and other considerations.
Advantages and disadvantages of each approach:
- The first approach is to create a separate AI-enabled form dedicated strictly to automated scoring. This gives you fully AI-scored evaluations and a clear view of agent quality based on AI scores. You can then focus on identifying lower-scored questions or lower-performing agents so you know where, and with whom, to spend time on additional feedback, coaching, and training. In this scenario, little to no time is spent performing agent evaluations manually.
However, this option can be somewhat limiting because it restricts your form to questions that can be easily understood and evaluated by a machine/AI tool. Note: See the next section below (“Best Practices for Building an AI-enabled Form (creating AI questions)”) for more information on this topic.
If you plan to ask more subjective questions on your form (whether emotional/soft skill assessments or scale type of questions/response choices), you might want to consider either building a separate form to be used for manual scoring only or a hybrid/combination form.
- A second approach is to have a separate manual form (in addition to the AI form) that allows you to evaluate more complex questions that require a human ear. However, it results in two separate sets of data (evaluations and scores) for the same agent/interaction. You answer fewer questions manually (because some are answered by AI), but you are likely still performing the same number of evaluations as before, unless you choose to focus only on reviewing, and potentially manually scoring, interactions that received low AI scores. That would be a more targeted manual evaluation approach than one based on random selection.
- A third option is a hybrid/combination form where some questions are intended to be answered by AI (i.e., have AI keywords/phrases associated with them) and some are intended to be evaluated manually by a human. An advantage of this approach is that you end up with a single evaluation/score for that agent/interaction (versus two separate evaluations). However, because only some questions are enabled with AI keywords, the evaluation can never be fully scored by AI and will always start in an “AI Pending” status (rather than “AI Scored”), which is similar to a “Draft”. For an “AI Pending” evaluation to become a “Completed” evaluation (i.e., an actual score), a human must finish it by answering the remaining questions manually. As a result, all evaluations sit in the “AI Pending” state, and typically only a small subset is completed by a human. We do provide a way to expire or clean up those “AI Pending” evaluations after “X” days, where “X” is configurable, which helps reduce the “noise”. This approach saves some evaluation time by having some questions pre-answered by AI, but it requires human intervention in order to get any quality data/scores for tracking and reporting. A conceptual sketch of this status flow follows the note below.
Note: For more information on configuring the expiration settings for AI Pending evaluations and/or reducing the % of automatically scored evaluations, please read the related article here.
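To make the status behavior above more concrete, here is a minimal, hypothetical Python sketch (it is not the QM Assist implementation): an evaluation whose questions are all auto-answered ends up “AI Scored”, a hybrid evaluation with unanswered manual questions stays “AI Pending” until a human submits it as “Completed”, and “AI Pending” records older than a configurable number of days can be cleaned up. The `Question`, `Evaluation`, and `expire_stale` names are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Question:
    text: str
    ai_enabled: bool               # has AI keywords/phrases attached
    answer: Optional[str] = None   # filled in by AI or by a human

@dataclass
class Evaluation:
    questions: List[Question]
    created_at: datetime = field(default_factory=datetime.utcnow)
    submitted_by_human: bool = False

    def status(self) -> str:
        if self.submitted_by_human:
            return "Completed"     # a human finished and submitted it
        if all(q.answer is not None for q in self.questions):
            return "AI Scored"     # every question was answered automatically
        return "AI Pending"        # some questions still need a human (hybrid forms stay here)

def expire_stale(evaluations: List[Evaluation], days: int) -> List[Evaluation]:
    """Drop 'AI Pending' evaluations older than the configurable 'X' days."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    return [e for e in evaluations
            if not (e.status() == "AI Pending" and e.created_at < cutoff)]
```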
The three approaches described above, along with the pros and cons of each summarized below, may help you decide on the direction that is best for your organization’s quality assurance program.
| | AI Form Only | AI Form + Manual Form (separate forms) | Hybrid Form (AI and manual blended) |
| --- | --- | --- | --- |
| PROs | Clear scores/results. Little manual effort. | Allows evaluating based on complex behaviors or soft skills. | Can include more complex, soft skill-type questions and responses in the manually scored part of the form. Reduces manual effort by having AI pre-score the “easier”/binary questions. |
| CONs | The form can only have questions and responses that can be easily scored by a machine/AI. | Requires manual scoring effort on top of AI scoring. Results in two sets of separate data/scores for the same agent interaction (manually scored and AI scored). | All evaluations end up in an “AI Pending” state (similar to a manual Draft), meaning they do not have any scores and need to be finished and submitted manually by a human. |
Best Practices for Building an AI-enabled Form (creating AI questions)
- Ask questions that are easily understandable by a machine/AI:
- If possible, avoid assessments related to emotion, such as “Did the agent show empathy?”. It is virtually impossible for a machine to determine this unless you have clearly identified what that emotion (empathy, in this example) sounds like in terms of what the agent would say to demonstrate it, and have added those statements as AI keywords/phrases to the question’s response option(s). See below for more tips on adding response options/answers.
- Use “Yes”/ “No” or “True”/ “False” or “Pass”/ “Fail” (binary type) responses/answers:
- This will allow you to add AI keywords/phrases to the responses/answers to indicate whether the agent said or did not say something. AI scoring is only based on what the agent said (not the customer/caller) so, when adding AI keywords, focus on what you would expect the agent to say. The system uses “fuzzy matching” to look for “intent” so it will also check for related phrases that are in the same context as what was specified.
- QM Assist focuses on assessing agent quality. If you’re looking for a more comprehensive assessment of the entire interaction, including the customer side, please look into Talkdesk Interaction Analytics™ documentation.
- When using binary options such as “Yes”/“No”, it is only necessary to add AI keywords/phrases to the “positive” option (i.e., “Yes”) to indicate what that answer would sound like based on what the agent would say. The “negative” option (i.e., “No”) can be left blank. The system analyzes the transcript to see whether it finds a match corresponding to the “positive” answer and, if so, that answer is chosen. If the system does not find a match for the “positive” answer, the “negative” answer can be selected automatically, provided that it is configured with the AI keyword #fallback, which indicates it should be chosen when no other answer for that question is matched. If #fallback is not configured and no other answer option is matched, the question is left unanswered (AI Pending). A minimal sketch of this selection logic is included below.
- Do not enable the “N/A” response/answer option. The system has no way of determining whether something applies or doesn’t apply to a particular interaction/call, as it doesn’t have the same context as a human.
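The selection logic in the last two tips can be illustrated with a small, hypothetical Python sketch. It is not the actual QM Assist matcher (which uses fuzzy, intent-based matching rather than the literal substring check used here); `pick_answer` and the sample keywords are assumptions made for illustration only.

```python
from typing import Dict, List, Optional

def pick_answer(agent_transcript: str,
                options: Dict[str, List[str]]) -> Optional[str]:
    """
    options maps an answer label (e.g. "Yes", "No") to its AI keywords/phrases.
    The real product uses fuzzy, intent-based matching; plain substring matching
    stands in for it here just to show the selection order.
    """
    text = agent_transcript.lower()
    fallback_label = None

    for label, keywords in options.items():
        if "#fallback" in keywords:
            fallback_label = label            # chosen only if nothing else matches
            continue
        if any(kw.lower() in text for kw in keywords):
            return label                      # a "positive" keyword matched

    # No match: use the #fallback option if configured; otherwise leave the
    # question unanswered (it stays "AI Pending").
    return fallback_label

# Example: "Yes" holds the expected agent phrases, "No" is the fallback.
answer = pick_answer(
    "thank you for calling, my name is Alex, how can I help you today",
    {"Yes": ["my name is", "you're speaking with"], "No": ["#fallback"]},
)
print(answer)  # -> "Yes"
```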
- Assign the AI-enabled form to queue(s)/ring group(s):
- Assigning a form tells the system which calls to evaluate with AI and which AI form to use. A form can be assigned to many queues (ring groups), but a single ring group can only belong to one form. Keep in mind that this is only needed for AI-enabled forms, not for manual evaluation forms. The sketch at the end of this section illustrates this mapping.
- Evaluating calls with AI (auto-scoring):
- Please keep in mind that, for AI/auto-scoring to happen, the STT (speech-to-text) transcript for the call recording must be available so that it can be assessed against the form’s AI-enabled intents (keywords/phrases). This happens in near real time, but there may be a lag between when the call takes place/the recording is made and when the recording is transcribed and the call can be evaluated.
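Putting the last two points together, here is a minimal, hypothetical sketch (not Talkdesk’s actual pipeline) of the ordering constraint: scoring only runs once the STT transcript exists, and the form used is the one assigned to the call’s ring group. The `RING_GROUP_TO_FORM` table and the function names are assumptions for illustration; note that several ring groups can share one form, but each ring group maps to a single AI form.

```python
from typing import Callable, Dict, Optional

# Hypothetical assignment table: many ring groups may share one AI form,
# but each ring group maps to exactly one AI-enabled form.
RING_GROUP_TO_FORM: Dict[str, str] = {
    "support": "customer_care_ai_form",
    "billing": "customer_care_ai_form",
    "sales": "sales_ai_form",
}

def maybe_auto_score(call_id: str,
                     ring_group: str,
                     get_transcript: Callable[[str], Optional[str]]) -> Optional[str]:
    """Auto-score a call only once its STT transcript is available."""
    form_id = RING_GROUP_TO_FORM.get(ring_group)
    if form_id is None:
        return None               # no AI form assigned to this ring group: nothing to score

    transcript = get_transcript(call_id)
    if transcript is None:
        return None               # transcription lags the recording: retry later

    # ...run keyword/intent matching (see the earlier sketch) against form_id here...
    return f"call {call_id} scored with form {form_id}"
```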
Making Ongoing/Future Changes to Keywords on a “Published” Form
Once you publish a form in QM Assist (AI-enabled or manual), any changes, including changes to AI keywords/phrases, require you to duplicate that form (the duplicate becomes a draft), make your edits, and then publish the new version (while potentially deactivating the old one).
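As a mental model only (this is not a QM Assist API), the workflow can be sketched as versioned, immutable records: the published form is never edited in place; a duplicate draft receives the keyword changes and is published as a new version, while the old version may be deactivated. All names here are hypothetical.

```python
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class Form:
    name: str
    version: int
    ai_keywords: Tuple[str, ...]   # immutable once published
    status: str = "draft"          # "draft", "published", or "inactive"

def update_published_form(old: Form, new_keywords: List[str]) -> List[Form]:
    """Duplicate -> edit -> publish; optionally deactivate the old version."""
    draft = replace(old, version=old.version + 1,
                    ai_keywords=tuple(new_keywords), status="draft")
    published = replace(draft, status="published")   # publish the edited copy
    retired = replace(old, status="inactive")        # optionally deactivate the old version
    return [retired, published]
```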
To know more about QM Assist, please visit this article.