Talkdesk QM Assist: Overview

Talkdesk QM Assist is an optional add-on for the Quality Management (QM) application that provides searchable call transcripts, highlights key moments, analyzes customer sentiment, and automatically evaluates all calls in near real-time. Depending on the complexity of your quality evaluation forms, this feature can replace or augment the manual quality management process with fully automated, Artificial Intelligence (AI)-driven interaction scoring that is tailored to your unique evaluation criteria.

Note: If you wish to purchase the QM Assist add-on, please get in touch with your Customer Success Manager (CSM) so that our internal teams can configure this feature for you.


How Quality Management Assist works 

When users are enabled in the Quality Management Assist app, they can view the following information:


  • Full call transcript [1]: The full call transcript is displayed on the side panel.
  • Keyword search [2]: The keyword search allows you to look up specific words or phrases in the transcript.
  • Overall contact sentiment [3]: At the top of the transcript, you’ll see the overall contact sentiment, shown as positive, neutral, or negative. This overall sentiment is from the perspective of the customer.
  • Utterance sentiment [4]: The sentiment for both the agent and the customer is shown for each utterance throughout the transcript itself.

Note: Agents will only see the side panel if there's a transcript for the call. 


Creating Automated AI-Scoring Forms 

Using the same QM form builder, customers can enable QM forms for AI evaluation, associating specific keywords with answers so that intent can be detected.

AI-enabled forms are then assigned to Ring Groups/Queues for up to 100% automated scoring. 

Calls within the selected Ring Groups/Queues will then be evaluated using the designated form. Multiple AI-enabled QM forms can also be created and assigned to different Ring Groups/Queues to evaluate different types of calls.

To create an automated AI scoring form, please follow these steps:


1. As an Administrator, click on the Create form button [1] to create a scoring form.

2. You can also edit an existing form by clicking on the Edit icon [2].


3. To enable the form to be used in automatic scoring calls, turn on the Enable AI evaluation toggle [3] at the top. This action will make the AI keywords option [4] appear.


4. Clicking on AI keywords opens a side panel. In this AI keyword configuration panel, you can assign the keywords the agent is expected to say [5]. The system then performs intent matching, comparing the keywords and phrases on this AI-enabled form against the call transcript to carry out the evaluation.


Important Note: In the “No” answer response [6] (or any response you want selected when no keywords match another response), insert the word “#fallback” [7]. The AI system checks for keyword matches first, and only if it finds no match, or its confidence level is very low, will it select the response containing “#fallback”. An answer option cannot be left blank, so you must insert this keyword in any answer option(s) you want the AI to select in the absence of a keyword match.
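The fallback behavior described above can be sketched as a simple selection rule. This is a minimal illustration only; the function names, data structures, and confidence threshold are assumptions for the sake of the example, not Talkdesk’s actual implementation.

```python
# Hypothetical sketch of "#fallback" answer selection.
# All names and the 0.6 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.6  # assumed minimum confidence for a keyword match


def select_answer(answers, match_scores):
    """Pick the answer whose keywords matched the transcript, or the
    #fallback answer if no match clears the confidence threshold.

    answers: dict mapping answer label -> list of configured keywords
    match_scores: dict mapping answer label -> match confidence (0..1)
    """
    fallback = None
    best_label, best_score = None, 0.0
    for label, keywords in answers.items():
        if "#fallback" in keywords:
            fallback = label  # remember which answer is the fallback
            continue
        score = match_scores.get(label, 0.0)
        if score > best_score:
            best_label, best_score = label, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return best_label
    return fallback  # no confident keyword match: select the #fallback answer


answers = {
    "Yes": ["thank you for calling", "is there anything else"],
    "No": ["#fallback"],
}
print(select_answer(answers, {"Yes": 0.9}))  # confident match -> "Yes"
print(select_answer(answers, {"Yes": 0.2}))  # low confidence -> "No"
```

This also shows why a blank answer option breaks the evaluation: with neither keywords nor “#fallback”, an answer can never be selected.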


5. On the “Forms” page, click on the More Options button. Then, select Assign queue [8] to associate the desired AI-enabled form with one or more Ring Groups/Queues. By doing so, the calls taken or made by agents on that Ring Group/Queue are evaluated automatically by the system.



  • It is mandatory to assign one or more queues to a form, otherwise, the automatic evaluation will not work. 
  • One form can be used in more than one queue, however, one queue can only have one form assigned to it.
  • While configuring AI-enabled forms, please avoid ambiguous questions and answers, as ambiguity may prevent the system from producing automated results.
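The assignment rules above (a form can serve many queues, but each queue holds at most one form, and a form without queues is never applied) can be sketched as a simple mapping. The data structures and function names here are assumptions for illustration, not part of the product.

```python
# Illustrative sketch of the queue/form assignment constraints.
# All names are hypothetical; this is not Talkdesk's API.


def assign_queues(queue_to_form, form_id, queue_ids):
    """Assign form_id to each queue, replacing any previous assignment.

    queue_to_form: dict mapping queue id -> form id (one form per queue)
    """
    if not queue_ids:
        raise ValueError("At least one queue must be assigned, "
                         "otherwise automatic evaluation will not run.")
    for queue in queue_ids:
        queue_to_form[queue] = form_id  # a queue holds exactly one form
    return queue_to_form


assignments = {}
assign_queues(assignments, "form_sales", ["queue_a", "queue_b"])  # one form, two queues
assign_queues(assignments, "form_support", ["queue_b"])           # reassigning replaces
print(assignments)  # {'queue_a': 'form_sales', 'queue_b': 'form_support'}
```

Modeling the mapping as queue → form (rather than form → queues) makes the “one form per queue” constraint hold by construction.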


Generating Time-stamped Annotations 


Intent is analyzed throughout the call, generating time-stamped annotations. These are added automatically by AI wherever the intent was matched, based on the automated scoring/keyword detection from the AI-enabled form. They are visible in the recording timeline and appear as positive or negative icons [1].

With these annotations, you can more easily identify where an answer was found, view quick reactions (positive or negative), and quickly locate relevant insights.
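Conceptually, an annotation pairs a timestamp with the matched keyword and its reaction. The sketch below illustrates that idea under assumed data structures; it is not the product’s implementation.

```python
# Hypothetical sketch: deriving time-stamped annotations from keyword
# matches in a transcript. Names and structures are assumptions.


def annotate(transcript, keyword_reactions):
    """Return (timestamp, keyword, reaction) tuples wherever a configured
    keyword appears in an utterance.

    transcript: list of (timestamp_seconds, utterance_text) pairs
    keyword_reactions: dict keyword -> "positive" or "negative"
    """
    annotations = []
    for ts, text in transcript:
        lowered = text.lower()  # case-insensitive matching
        for keyword, reaction in keyword_reactions.items():
            if keyword in lowered:
                annotations.append((ts, keyword, reaction))
    return annotations


transcript = [
    (12.0, "Thank you for calling, how can I help?"),
    (95.5, "I want to cancel my subscription."),
]
reactions = {"thank you for calling": "positive", "cancel": "negative"}
print(annotate(transcript, reactions))
# [(12.0, 'thank you for calling', 'positive'), (95.5, 'cancel', 'negative')]
```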


Viewing AI-Scoring Evaluation Results


On the Evaluations page [1], there are two additional evaluation statuses:

  • “AI pending” [2]: Evaluations that the system attempted to score but could not complete, either because no matching answer was found or because QM Assist did not have enough confidence in the answer it found. The evaluation is therefore only partially scored and needs to be reviewed, completed, and submitted by the supervisor.
    • AI Pending evaluations are only visible to supervisors/admins. When the evaluations are completed, they will have the user’s ID as the evaluator and can be shared with the agent.
  • “AI scored” [3]: Evaluations that have been fully completed by QM Assist; they can still be edited by the supervisor.
    • Currently, AI Scored evaluations are only visible to supervisors and admins. If you want agents to see them, you need to open and edit/save them so that the supervisor’s name is recorded as the evaluator, making the evaluation visible to the agent. This way, supervisors control which evaluations are shared with their agents by sharing only those they edit/save.


For additional support, please reach out to your Customer Success Manager.
