Talkdesk QM Assist: Overview

Talkdesk QM Assist™ is part of Quality Management (QM). It provides searchable call transcripts, highlights key moments, analyzes customer sentiment, and automatically evaluates all calls in near real-time.

This feature can replace or augment the manual quality management process (depending on the complexity of your quality evaluation forms) with fully automated, Artificial Intelligence (AI) driven interaction scoring tailored to your unique evaluation criteria.

Note: This feature requires additional licensing. Enablement and setup are conducted by Talkdesk.


In this article, you will find information on:

  • How QM Assist Works
  • Creating AI-enabled Forms
  • Associating AI-enabled Forms
  • Publishing AI-enabled Forms
  • Generating Time-stamped Annotations
  • Viewing AI-Scoring Evaluation Results

How QM Assist Works 

Once the QM Assist application is enabled, you can view the following information: 


  • Full call transcript [1]: The full call transcript is displayed on the side panel.
  • Keyword search [2]: The keyword search allows you to look up specific words or phrases in the transcript.
  • “Overall contact sentiment” [3]: At the top of the transcript, you’ll see the overall contact sentiment, shown as positive, neutral, or negative. This overall contact sentiment is measured from the customer’s perspective.
  • Utterance sentiment [4]: Each utterance’s sentiment, for both the agent and the customer, is shown throughout the transcript itself.

Note: Agents will only see the side panel if there's a transcript for the call. 


Creating AI-enabled Forms 

By using the same QM form builder, customers can enable QM forms for AI evaluation, associating specific keywords with answers so the system can detect intent.

AI-enabled forms are then assigned to Dispositions/Queues for up to 100% automated scoring.

Calls within the selected Dispositions/Queues will then be evaluated using the designated form. Multiple AI-enabled QM forms can also be created and assigned to different Dispositions/Queues to evaluate different types of calls.

To create an automated AI scoring form, please follow these steps:


1. Click on the Create form button [1] to create a scoring form. If you prefer, instead of creating a new form, you can edit an existing one by clicking the Edit icon [2].


2. To enable the form to be used for automatically scoring calls, turn on the Enable AI evaluation toggle [3]. This action will make the AI keywords option [4] appear; clicking it opens a side panel.


In this AI keyword configuration panel, you can assign the keyword(s) the agent is expected to say [5]. The system then performs intent matching, comparing the keywords and phrases on this AI-enabled form to the call transcript in order to perform the evaluation. To add keywords (single words, phrases, or full sentences), type them, press ENTER on your keyboard, and click “X” [6] to close the side panel.


Important Note: For the “No” answer response [7] (or any response you’d want selected when no keywords are matched on another response), insert the word “#fallback” [8]. The AI system checks for keyword matches first; only when it doesn’t find a match, or has very low confidence, will it select the response containing “#fallback”. An answer option cannot be left blank, so you need to insert this keyword in any answer option(s) that you’d want the AI to select in the absence of a keyword match.
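The keyword-match and “#fallback” behavior described above can be sketched as follows. This is an illustrative sketch only, not Talkdesk’s actual implementation: the function names, the confidence scoring (a toy substring check standing in for real intent matching), and the threshold value are all assumptions.

```python
# Sketch of keyword matching with "#fallback" selection. Hypothetical
# logic for illustration; a real system would use intent matching with
# a trained model, not substring search.

FALLBACK = "#fallback"
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off for "very low confidence"


def match_confidence(keyword, transcript):
    """Toy confidence score: 1.0 on a case-insensitive substring match."""
    return 1.0 if keyword.lower() in transcript.lower() else 0.0


def select_answer(answers, transcript):
    """Pick the answer whose keywords best match the transcript.

    If no keyword matches with enough confidence, fall back to the
    answer tagged "#fallback"; with no fallback tagged, the question
    is left unanswered and needs manual review.
    """
    best_answer, best_score = None, 0.0
    fallback_answer = None
    for answer, keywords in answers.items():
        if FALLBACK in keywords:
            fallback_answer = answer
        for kw in keywords:
            if kw == FALLBACK:
                continue
            score = match_confidence(kw, transcript)
            if score > best_score:
                best_answer, best_score = answer, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return best_answer
    return fallback_answer


# "Yes" expects the agent to say a phrase; "No" is the fallback option.
answers = {"Yes": ["thank you for calling"], "No": ["#fallback"]}
print(select_answer(answers, "Thank you for calling Acme, how can I help?"))  # Yes
print(select_answer(answers, "Hello, what do you need?"))  # No (fallback)
```

Note how the fallback answer is only chosen after every real keyword has been checked, matching the article’s description that the system looks for matches first and resorts to “#fallback” last.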


Associating AI-enabled forms


To associate the desired AI-enabled form with one or more Dispositions/Queues, access the “Forms” page and click on the More Options (...) button. Then, select Assign disposition/queue [1]. Calls taken or made by agents with those dispositions/queues will then be evaluated automatically by the system.


  • It is mandatory to assign dispositions/queues to a form; otherwise, the automatic evaluation will not work.
  • One form can be used with more than one disposition/queue; however, one disposition/queue can only have one form assigned to it.
  • Avoid ambiguous questions/answers when configuring AI-enabled forms, as ambiguity may prevent the AI from producing automated results.


Publishing AI-enabled Forms 


1. Once you’re done building the AI-enabled form, select Assign and Publish [1].


A modal will appear where you can choose whether the form is configured with “Dispositions” [2], “Queues” [3], or both.

2. Click Publish [4].


  • A form can have multiple dispositions, but a disposition can only be assigned to one form.
  • A form can have both queues and dispositions assigned to it, but the dispositions take precedence.
  • If the disposition is added later, rather than at the end of the call, the evaluation will still be done based on the assigned Dispositions/Queues.
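The selection rules above — each disposition or queue maps to at most one form, and a matching disposition wins over a matching queue — can be sketched like this. The function and variable names are hypothetical, chosen for illustration only.

```python
# Sketch of form selection for a call, per the precedence rules above:
# disposition mapping is checked first, then the queue mapping.
# Hypothetical logic, not Talkdesk's implementation.

def select_form(call_disposition, call_queue, disposition_forms, queue_forms):
    """Return the AI-enabled form for a call, dispositions taking precedence."""
    if call_disposition in disposition_forms:
        return disposition_forms[call_disposition]  # disposition wins
    return queue_forms.get(call_queue)  # otherwise fall back to the queue


# One form can back several dispositions, but each disposition/queue
# has exactly one form assigned to it.
disposition_forms = {"billing": "Billing form", "refund": "Billing form"}
queue_forms = {"support": "Support form"}

print(select_form("billing", "support", disposition_forms, queue_forms))  # Billing form
print(select_form(None, "support", disposition_forms, queue_forms))       # Support form
```

The one-form-per-disposition rule is what makes this lookup unambiguous: a plain dictionary can hold the mapping because no disposition or queue ever points at two forms.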


Generating Time-stamped Annotations 


Intent is analyzed throughout the call, generating time-stamped annotations that AI adds automatically when and where an intent is matched, based on the automated scoring/keyword detection from the AI-enabled form. They are visible in the recording timeline and appear as positive or negative icons [1].

With these annotations, it is easier to identify where an answer was found, to view quick reactions (positive or negative), and to quickly find relevant insights.


Viewing AI-Scoring Evaluation Results


On the “Evaluations” page, there are two additional evaluation statuses:

  • “AI pending” [1]: Evaluations that the system attempted to score but couldn’t fully complete, either because no matching answer was found or because QM Assist didn’t have enough confidence in the answer it found. The evaluation is partially scored and needs to be reviewed, completed, and submitted by the supervisor.
    • AI Pending evaluations are only visible to supervisors/admins. When the evaluations are completed, they will have the user’s ID as the evaluator and can be shared with the agent.
  • “AI scored” [2]: Evaluations that QM Assist has fully completed; they can still be edited by the supervisor.
    • Currently, AI Scored evaluations are only visible to supervisors and admins. If you want agents to see them, open and edit/save them so that the supervisor’s name is set as the evaluator ID; the evaluation then becomes visible to the agent. This way, supervisors control which evaluations are shared with their agents by only sharing those they edit/save.
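The split between the two statuses above comes down to whether every question on the form received a confident answer. A minimal sketch of that derivation, with hypothetical names and assuming an unanswered question is represented as None:

```python
# Sketch of how the two evaluation statuses could be derived:
# "AI scored" when every question got an answer, "AI pending" when
# at least one question still needs the supervisor. Hypothetical
# logic for illustration, not Talkdesk's implementation.

def evaluation_status(answers):
    """answers maps question -> selected answer (None if unanswered)."""
    if all(a is not None for a in answers.values()):
        return "AI scored"
    return "AI pending"


print(evaluation_status({"Q1": "Yes", "Q2": "No"}))    # AI scored
print(evaluation_status({"Q1": "Yes", "Q2": None}))    # AI pending
```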

Note that when there is a label in the “Queue” [3] or “Disposition” [4] columns (see example above), it means they were used for the form selection.


If you hover the mouse over the label, an explanation appears [5] stating that “QM Assist evaluated the interaction based on the queue or disposition: X” (“X” is the name of the queue or disposition that determined which form was used).
