Talkdesk QM Assist™ is an optional add-on for the Quality Management (QM) application that uses the Speech Analytics application to provide searchable call transcripts, highlight key moments, analyze customer sentiment, and automatically evaluate all calls in close to real time. This feature can replace the manual quality management process with fully automated, Artificial Intelligence (AI)-driven interaction scoring that is custom-tailored to your unique evaluation criteria.
Note: If you wish to purchase the QM Assist add-on, please get in touch with your Customer Success Manager (CSM), so that our internal teams can configure this feature for you.
Enabling Transcriptions for Users
Users are enabled for call transcription during the implementation phase and/or by belonging to Ring Groups enabled for QM Assist.
For users to see the transcription card on calls that have been transcribed, they must enable the Transcription Card app by following these steps:
1. Select My Apps and click on the Transcription app settings.
2. On the Users tab, select the desired users.
3. Click Save changes.
Note: Talkdesk internal teams will make the necessary configurations to determine which calls get transcribed and evaluated by QM Assist, in accordance with your requests and needs.
How Quality Management Assist works
When users are enabled in the Quality Management Assist app, they can view a side panel on the Evaluations page displaying the following information:
- Full call transcript: the complete call transcript is displayed on the side panel.
- Keyword search: the keyword search allows you to look up specific words or phrases in the transcript.
- “Overall contact sentiment”: at the top of the transcript, you’ll see the overall contact sentiment, shown as positive, neutral, or negative. This overall contact sentiment is assessed from the perspective of the customer.
- Per-utterance sentiment: the sentiment for both the agent and the customer side of each utterance is shown throughout the transcript itself.
Note: Agents will only see the side panel if a transcript exists for the call.
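The overall contact sentiment can be thought of as an aggregation of the per-utterance customer sentiment shown in the transcript. Talkdesk does not document its exact algorithm, so the following is a minimal sketch assuming a simple majority vote over customer utterances; the field names and tie-handling are illustrative only:

```python
from collections import Counter

def overall_contact_sentiment(utterances):
    """Illustrative only: derive an overall label ("positive",
    "neutral", "negative") from per-utterance customer sentiment.
    The real QM Assist algorithm is not publicly documented."""
    customer = [u["sentiment"] for u in utterances if u["speaker"] == "customer"]
    if not customer:
        return "neutral"
    ranked = Counter(customer).most_common()
    top_label, top_count = ranked[0]
    # A tie between sentiments falls back to "neutral" in this sketch.
    if len(ranked) > 1 and ranked[1][1] == top_count:
        return "neutral"
    return top_label

call = [
    {"speaker": "customer", "sentiment": "positive"},
    {"speaker": "agent",    "sentiment": "neutral"},
    {"speaker": "customer", "sentiment": "positive"},
    {"speaker": "customer", "sentiment": "negative"},
]
print(overall_contact_sentiment(call))  # positive
```

Note that, consistent with the article, only customer-side utterances feed the overall label; agent-side sentiment is displayed per utterance but not aggregated here.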
Creating Automated AI-Scoring Forms
Using the same QM form builder, customers can enable QM forms for AI evaluation, associating specific keywords with answers so that the system can detect intent.
AI-enabled forms are then assigned to Ring Groups/Queues for up to 100% automated scoring.
Calls within the selected Ring Groups/Queues will then be evaluated using the designated form. Multiple AI-enabled QM forms can also be created and assigned to different Ring Groups/Queues to evaluate different types of calls.
To create an automated AI scoring form, please follow these steps:
1. As an Administrator, go to the Forms tab .
2. Create a scoring form  or edit a previous one by clicking on the Edit icon .
3. To enable the form for automatic call scoring, turn on the Enable AI evaluation toggle at the top. The AI keywords matching options will then appear.
4. Clicking on the AI keywords matching options opens the side panel. In this AI keyword configuration panel, the user can assign the keywords the agent is expected to speak. The system then performs intent matching, comparing the keywords and phrases on this AI-enabled form to the call transcript in order to perform the evaluation.
- In the “No” answer response (or any response you want selected when no keywords are matched on another response), insert the word “#fallback”. The AI system first checks for keyword matches and, only if it finds no match or has a very low confidence level, selects the response containing “#fallback”. An answer option cannot be left blank, so you need to insert this keyword on any answer option(s) you want the AI to select in the absence of a keyword match.
5. Associate the desired AI-enabled form with one or more Ring Groups/Queues so that the calls taken or made by agents on that Ring Group/Queue are evaluated automatically by the system.
Note: Please avoid ambiguous questions/answers when configuring AI-enabled forms, as ambiguity may prevent the system from producing automated results.
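The keyword-matching and “#fallback” behavior described in step 4 can be sketched as follows. This is not Talkdesk's actual implementation; the matching heuristic, confidence measure, and threshold are assumptions made purely to illustrate the rule that the fallback answer is chosen only when no other answer matches with enough confidence:

```python
import re

def score_question(transcript, answers, min_confidence=0.5):
    """Illustrative sketch of the "#fallback" rule; NOT Talkdesk's
    actual algorithm. `answers` maps answer labels to keyword lists;
    an answer whose list contains "#fallback" is selected only when
    no other answer's keywords match with enough confidence."""
    words = set(re.findall(r"[a-z']+", transcript.lower()))
    fallback = None
    best_label, best_conf = None, 0.0
    for label, keywords in answers.items():
        if "#fallback" in keywords:
            fallback = label  # chosen only when nothing else matches
            continue
        # Naive confidence: fraction of this answer's keywords found.
        hits = sum(1 for k in keywords if k.lower() in words)
        confidence = hits / len(keywords) if keywords else 0.0
        if confidence > best_conf:
            best_label, best_conf = label, confidence
    return best_label if best_conf >= min_confidence else fallback

form = {
    "Yes": ["thank", "calling", "help"],  # keywords the agent should say
    "No":  ["#fallback"],                 # selected when nothing matches
}
print(score_question("Thank you for calling, how can I help you?", form))  # Yes
print(score_question("Hello?", form))                                      # No
```

The design point the sketch captures is that “#fallback” is not a keyword to be matched in the transcript; it marks the answer to fall back to, which is why every answer option must carry either keywords or the fallback marker.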
Generating Time-stamped Annotations
Intent is analyzed throughout the call, and AI automatically adds time-stamped annotations when and where intent is matched, based on the automated scoring and keyword detection from the AI-enabled form. These annotations are visible in the recording timeline and appear as positive or negative icons.
With these annotations it is possible to identify more easily where the answer was found, to view quick reactions (positive or negative), and quickly find relevant insights.
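As a rough illustration of how such timeline markers could be produced from a transcript (the data shapes and matching logic here are assumptions, not Talkdesk's implementation):

```python
def generate_annotations(utterances, keyword_polarity):
    """Illustrative only: emit time-stamped positive/negative markers
    where keywords from an AI-enabled form appear in the transcript.
    Not Talkdesk's actual logic or data model."""
    annotations = []
    for u in utterances:
        text = u["text"].lower()
        for keyword, polarity in keyword_polarity.items():
            if keyword in text:
                annotations.append(
                    {"time": u["start"], "keyword": keyword, "icon": polarity}
                )
    return annotations

utterances = [
    {"start": 12.4, "text": "Thank you for calling Acme support."},
    {"start": 95.0, "text": "I want to cancel my subscription."},
]
markers = generate_annotations(
    utterances, {"thank you": "positive", "cancel": "negative"}
)
print(markers)
```

Each marker carries the utterance timestamp, which is what lets the annotation be placed on the recording timeline and jumped to directly.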
Viewing AI-Scoring Evaluation Results
On the Evaluations page, there are two additional evaluation statuses:
- “AI pending”: evaluations that the system attempted to score but couldn’t complete, either because no matching answer was found or because QM Assist lacked sufficient confidence in the answer it found. The evaluation is therefore partially scored and needs to be reviewed, completed, and submitted by the supervisor.
- AI Pending evaluations are only visible to supervisors/admins. Once completed, they will show the user’s ID as the evaluator and can be shared with the agent.
- “AI scored”: evaluations that have been fully completed by QM Assist and can be edited by the supervisor.
- Currently, AI Scored evaluations are only visible to supervisors and admins. For agents to see them, you need to open and edit/save them so that the supervisor’s name is set as the evaluator ID; the evaluation then becomes visible to the agent. This way, supervisors control which evaluations are shared with their agents by only sharing those they edit/save.
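The split between the two statuses follows a simple rule: an evaluation is fully scored only when the AI found a confident answer for every question. A minimal sketch of that decision, assuming a hypothetical mapping of questions to AI-selected answers (this is not Talkdesk's actual data model):

```python
def evaluation_status(ai_answers):
    """Illustrative mapping to the two statuses above. Each value is
    the option the AI selected, or None when it found no match or
    lacked confidence. Not the actual Talkdesk logic."""
    if all(answer is not None for answer in ai_answers.values()):
        return "AI scored"   # fully completed by QM Assist
    return "AI pending"      # partially scored; supervisor must finish

print(evaluation_status({"q1": "Yes", "q2": "No"}))    # AI scored
print(evaluation_status({"q1": "Yes", "q2": None}))    # AI pending
```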
For additional support, please reach out to your Customer Success Manager.