The initial feedback on this solution is promising. In our beta test group, 88% of participants felt comfortable and psychologically safe practicing with it, suggesting people are ready for this technology. The conversations also felt remarkably human-like, especially compared with traditional chatbots, suggesting the technology is ready for people. Finally, 80% of participants believe it will improve speed to competency, thanks to its repeatability (no two conversations feel the same) and its relevant, personalised feedback.
The speed at which we become competent in our jobs correlates with our satisfaction, confidence, and overall company productivity. Through co-creating the first-ever AI Customer, we believe CARE agents can learn quickly through practice rather than theory. Of course, there is room to improve, and we've made mistakes along the way: the feedback isn't always spot on, and sometimes the AI fails to recognise call-handling errors. It's also a little too patient and polite a customer, which is easily the most unrealistic aspect! Our next step is to iterate on the AI based on this feedback and then introduce it into our training program so agents can see the benefit.