Talkdesk QM Assist

Talkdesk QM Assist is an optional add-on for the Quality Management (QM) application that uses the Speech Analytics application to provide searchable call transcripts, highlight key moments, analyze customer sentiment, and automatically evaluate all calls in near real-time. This feature can replace the manual quality management process with fully automated, Artificial Intelligence (AI)-driven interaction scoring that is custom-tailored to your unique evaluation criteria.

Note: If you wish to purchase the QM Assist add-on, please get in touch with your Customer Success Manager (CSM), so that our internal teams can configure this feature for you. 

 

Enabling Transcriptions for Users 

Users are enabled to have their calls transcribed during the implementation phase and/or when they belong to Ring Groups enabled for QM Assist.

For users to have the transcription card visible on their transcribed calls, the Transcription app must be enabled for them by following these steps:

[Screenshot: Enabling QM Assist for agents (callouts 1-2)]

1. Select My Apps [1] and click on the Transcription app settings [2].

[Screenshot: Enabling QM Assist for agents (callouts 3-5)]

2. On the Users tab [3], select the desired users [4].

3. Click Save changes [5].

Note: Talkdesk internal teams will make the necessary configurations to determine which calls get transcribed and evaluated by QM Assist, in accordance with your request and needs.

 

How Quality Management Assist Works

When users are enabled in the Quality Management Assist app, they can view a side panel on the Evaluations page [1] displaying the following information:

[Screenshot: How Quality Management Assist works (callouts 1-5)]

  • Full call transcript [2]: The full call transcript is displayed on the side panel.
  • Keyword search [3]: The keyword search allows you to look up specific words or phrases in the transcript.
  • “Overall contact sentiment” [4]: At the top of the transcript, you’ll see the overall contact sentiment as either positive, neutral, or negative. This overall contact sentiment is from the perspective of the customer.
  • Per-utterance sentiment [5]: The sentiment for both the agent and the customer is shown for each utterance throughout the transcript itself (a simplified illustration follows this list).
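The relationship between per-utterance sentiment and the overall contact sentiment can be pictured with a minimal sketch. This is illustrative only: it assumes a simple majority roll-up over the customer's utterances, and Talkdesk's actual sentiment model and roll-up logic are not described in this article.

```
# Illustrative only: per-utterance sentiment plus an assumed roll-up to an
# overall contact sentiment taken from the customer's side of the call.
from collections import Counter

transcript = [
    {"speaker": "agent",    "sentiment": "neutral",  "text": "Thanks for calling, how can I help?"},
    {"speaker": "customer", "sentiment": "negative", "text": "My order still hasn't arrived."},
    {"speaker": "agent",    "sentiment": "positive", "text": "I'm sorry about that. Let me resend it today."},
    {"speaker": "customer", "sentiment": "positive", "text": "That would be great, thank you."},
    {"speaker": "customer", "sentiment": "positive", "text": "Thanks again for the quick help."},
]

def overall_contact_sentiment(utterances):
    # Assumed roll-up: the most frequent sentiment among the customer's utterances.
    labels = [u["sentiment"] for u in utterances if u["speaker"] == "customer"]
    return Counter(labels).most_common(1)[0][0] if labels else "neutral"

print(overall_contact_sentiment(transcript))  # -> "positive"
```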

Note: Agents will only see the side panel if there is a transcript for the call.

 

Creating Automated AI-Scoring Forms 

Using the same QM form builder, customers can enable QM forms for AI evaluation, associating specific keywords with answers so that the system can detect intent.

AI-enabled forms are then assigned to Ring Groups/Queues for up to 100% automated scoring. 

Calls within the selected Ring Groups/Queues will then be evaluated using the designated form. Multiple AI-enabled QM forms can also be created and assigned to different Ring Groups/Queues to evaluate different types of calls.

To create an automated AI scoring form, please follow these steps:

[Screenshot: Creating automated AI-scoring forms (callouts 1-2)]

1. As an Administrator, go to the Forms tab [1].

2. Create a scoring form [2] or edit an existing one by clicking on the Edit icon [3].

[Screenshot: Creating automated AI-scoring forms (callouts 4-5)]

3. To enable the form to be used for automatic call scoring, turn on the Enable AI evaluation toggle [4] at the top. This will make the AI keywords matching options [5] appear.

[Screenshot: Creating automated AI-scoring forms (callouts 6-9)]

4. Clicking on the AI keywords matching options opens the side panel [6]. In this AI keyword configuration panel, assign the keywords the agent is expected to say [7]. The system then performs intent matching, comparing the keywords and phrases on the AI-enabled form against the call transcript to perform the evaluation.

  • In the “No” answer response [8] (or any response you want selected when no keywords match another response), insert the word “#fallback” [9]. The AI system first checks for keyword matches and, only when it doesn’t find a match or has a very low confidence level, selects the response containing “#fallback”. An answer option cannot be left blank, so insert this keyword in any answer option(s) you want the AI to select in the absence of a keyword match (see the illustrative sketch after this step).
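As a rough illustration of the matching behavior described above, the sketch below assumes a form question whose answer options each carry a keyword list, with “#fallback” on the answer to select when nothing matches. The data shapes, names, and matching rule are assumptions for illustration, not Talkdesk's implementation.

```
# Illustrative only: keyword matching with a "#fallback" answer option.
form_question = {
    "question": "Did the agent greet the customer?",
    "answers": {
        "Yes": ["good morning", "good afternoon", "thanks for calling"],
        "No":  ["#fallback"],  # selected only when no other answer's keywords match
    },
}

def score_question(question, transcript_text):
    transcript_text = transcript_text.lower()
    fallback_answer = None
    for answer, keywords in question["answers"].items():
        if "#fallback" in keywords:
            fallback_answer = answer
            continue
        if any(keyword in transcript_text for keyword in keywords):
            return answer            # a keyword matched
    return fallback_answer           # no match (or low confidence): use the fallback

print(score_question(form_question, "Good afternoon, thanks for calling Acme support."))  # -> "Yes"
print(score_question(form_question, "Hello, what do you need?"))                          # -> "No"
```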

5. Associate the desired AI-enabled form with one or more Ring Groups/Queues so that the calls taken or made by agents on that Ring Group/Queue are evaluated automatically by the system.

Note: Please avoid ambiguous questions and answers when configuring AI-enabled forms, as ambiguity may reduce the number of automated results.

 

Generating Time-stamped Annotations 

[Screenshot: Generating time-stamped annotations (callout 1)]

Intent is analyzed throughout the call, generating time-stamped annotations that AI adds automatically at the points where intent was matched, based on the automated scoring/keyword detection from the AI-enabled form. These annotations are visible in the recording timeline and appear as positive or negative icons [1].

These annotations make it easier to identify where an answer was found, view quick reactions (positive or negative), and quickly find relevant insights.
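A minimal sketch of how keyword detections in a transcript could become time-stamped annotations is shown below. The field names and polarity mapping are assumptions made for illustration; the actual annotation logic lives inside QM Assist.

```
# Illustrative only: turning keyword detections into timeline annotations.
utterances = [
    {"start_seconds": 3,  "speaker": "agent",    "text": "Good afternoon, thanks for calling."},
    {"start_seconds": 40, "speaker": "customer", "text": "I want to cancel my subscription."},
    {"start_seconds": 95, "speaker": "agent",    "text": "Is there anything else I can help with?"},
]

# keyword -> polarity of the icon shown on the recording timeline when detected
keyword_polarity = {
    "thanks for calling": "positive",
    "cancel my subscription": "negative",
    "anything else i can help": "positive",
}

def build_annotations(utterances, keyword_polarity):
    annotations = []
    for utterance in utterances:
        text = utterance["text"].lower()
        for keyword, polarity in keyword_polarity.items():
            if keyword in text:
                annotations.append({"timestamp": utterance["start_seconds"], "polarity": polarity})
    return annotations

print(build_annotations(utterances, keyword_polarity))
# -> [{'timestamp': 3, 'polarity': 'positive'}, {'timestamp': 40, 'polarity': 'negative'},
#     {'timestamp': 95, 'polarity': 'positive'}]
```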

 

Viewing AI-Scoring Evaluation Results

[Screenshot: Viewing AI-scoring evaluation results (callouts 1-3)]

On the Evaluations page [1], there are two additional evaluation statuses:

  • “AI pending” [2]: Evaluations that the system attempted to score but could not complete, either because no matching answer was found or because QM Assist did not have enough confidence in the answer it found. The evaluation is partially scored and needs to be reviewed, completed, and submitted by the supervisor.
    • AI Pending evaluations are only visible to supervisors/admins. When the evaluations are completed, they will have the user's ID as the evaluator and can be shared with the agent.
  • “AI scored” [3]: These evaluations have been fully completed by QM Assist and can be edited by the supervisor.
    • Currently, AI Scored evaluations are only visible to supervisors and admins. If you want agents to see them, open and edit/save them so that the supervisor’s name is set as the evaluator ID and the evaluation becomes visible to the agent. This way, supervisors control which evaluations are shared with their agents by only sharing those they edit/save.

 

For additional support, please reach out to your Customer Success Manager.
