XM Institute faculty answers common questions we hear from XM professionals.
XM ROI
1. How do we measure the ROI of CX?
2. What are some quantifiable benefits XM can provide to an organization?
CX Metrics
1. Which metric(s) should we use for our CX program? (NPS, CSAT, etc.)
2. What role should competitive benchmarking play in our CX program?
3. How should we set targets for our CX metrics?
4. Should we use NPS as our global CX metric?
5. Should we use NPS to capture interaction feedback?
6. Should we tie compensation to CX metrics?
7. How do we prevent employees from asking customers for good scores?
EX Metrics
1. Which EX metric(s) should we anchor our EX program on (e.g., engagement, eNPS, etc.)?
2. Should you tie compensation to EX metrics?
Survey Methodology
1. Will the direction of response scales affect our survey results?
2. What is a “Likert scale?”
3. Should we use 4-point scales for survey questions?
4. Should we ask open-ended survey questions?
5. What is a “double-barreled” survey question?
6. Should we use 7-point or 5-point scales for survey responses?
7. How much survey sample is enough?
8. What’s a good response rate for my CX survey?
9. Is it okay to use incentives to improve our survey response rates?
10. How should we distribute our surveys?
11. What is a good response rate for my EX survey?
12. What is a typical abandon rate (i.e., surveys that were started but not completed) for EX surveys?
13. What is the difference between anonymous and confidential employee surveys?
14. What is a good confidentiality threshold for my EX survey?
Voice of the Customer
1. What are alternatives (other than surveys) to listen for experience feedback?
2. What is the best timeframe for closing the loop with customers?
3. How often should we ask customers for interaction feedback?
4. How often should we ask customers for relationship feedback?
5. How can we avoid being seen as “the bearer of bad news” every time we share poor CX survey results with other groups?
6. How can we promote the idea of a centralized survey program to other departments?
7. What is non-survey listening?
Employee Listening
1. Which questions should we include in an employee engagement survey?
2. What are the key elements of an employee pulse survey?
3. How frequently should we do an employee pulse survey?
4. How long should our employee engagement surveys be?
5. How should we balance concerns about survey fatigue with the need to listen to employees more frequently?
6. Who is responsible for taking action on employee feedback?
7. What is the best practice for including/excluding partial responses in reporting?
8. What are the best open-ended questions to ask in an employee survey?
9. What is the best way to share open-ended employee comments with leaders?
10. Should we take action on survey themes/categories or specific survey items?
11. During a reorg., should we maintain our action plans, passing them from the old manager to the new manager?
12. How do we determine the “moments that matter” when developing a lifecycle program?
13. Who should have access to lifecycle dashboards?
Experience Design
1. What is a “persona?”
Can’t find the answer you’re looking for?
- Ask your peers. Get help from your peers in the XM Pro’s online forum
- Ask an expert. Ask a Qualtrics expert during our weeklong Expert Q&A sessions
- Submit a question. If you think your question is shared by a large number of XM professionals, submit it for consideration to be added to this page.
XM ROI
1. How do we measure the ROI of CX?
There are a number of different approaches you can take to measure the ROI of your CX efforts. Here’s an overview of one we often recommend: 1) Establish a core CX metric – such as NPS, satisfaction, or a blended metric – that is representative of customer experience and measurable at the individual customer level, 2) Identify measurements that represent loyalty behaviors that have a financial impact on your business (e.g. additional purchases, churn rate, share of wallet, making recommendations to friends, etc.) and are measurable at the individual customer level, 3) Separate out customers by their core CX metric scores, grouping them from high to low. We also recommend separating these scores by key customer segments, 4) Then, using these groupings and the loyalty behaviors you identified in Step 2, either calculate the average loyalty levels for each grouping of CX metric scores (which will show you how much increased loyalty you can expect from each group if you increase their CX ratings) OR run a regression analysis to see how the core CX metric score correlates with those different loyalty behaviors, 5) Regardless of which Step 4 option you use, the final step is to calculate the value of a change in customer loyalty. So for each customer segment, translate a 1% increase in the core CX metric to some resulting change in your key loyalty metric(s). Then calculate the dollar value for that 1% change by translating the change in the loyalty metric(s) to a dollar value. Ultimately you should be able to communicate the ROI in a form like this: “A W% change in [core CX metric] for this customer group will result in an X% change in [loyalty metric], which will drive an increase of $Y in revenues and/or a decrease of $Z in costs.”
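To make Steps 3-5 concrete, here is a minimal Python sketch of the grouping-and-averaging option. The column names ("nps_score", "annual_revenue", "churned"), the band cut-offs, and all figures are illustrative assumptions, not the XM Institute's implementation – just one way to run the math.

```python
# Minimal sketch of Steps 3-5 (the averaging option), using pandas.
# All column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "nps_score":      [10, 9, 8, 7, 6, 3, 9, 2, 8, 10],   # core CX metric (0-10)
    "annual_revenue": [1200, 950, 800, 700, 500, 300, 1100, 250, 780, 1300],
    "churned":        [0, 0, 0, 1, 1, 1, 0, 1, 0, 0],     # loyalty behavior
})

# Step 3: group customers from high to low on the core CX metric.
bands = pd.cut(df["nps_score"], bins=[-1, 6, 8, 10],
               labels=["Detractor", "Passive", "Promoter"])

# Step 4 (averaging option): mean loyalty behavior per CX band.
by_band = df.groupby(bands, observed=True)[["annual_revenue", "churned"]].mean()
print(by_band)

# Step 5: dollar value of moving a customer up one band.
uplift = (by_band.loc["Promoter", "annual_revenue"]
          - by_band.loc["Passive", "annual_revenue"])
print(f"Estimated revenue uplift per customer moved from Passive to Promoter: ${uplift:.0f}")
```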
For more information, see:
– Report: Global Study: ROI of Customer Experience, 2023
– Blog: Metrics: Recipes for Proving ROI and Elevating XM
– Report: U.S. Consumer Journeys Needing Improvement Across 22 Industries, 2023
2. What are some quantifiable benefits XM can provide to an organization?
This is a big topic, but here are five ways well-executed XM programs and initiatives can positively impact an organization: 1) XM helps you identify and recover from bad experiences, both on an individual level – surfacing where a customer or employee has had a negative experience so you can intervene to mend the relationship – as well as on a broader scale – identifying when something more systemic, like a product, journey, or process, is broken so you can fix it to prevent future bad experiences, 2) XM increases retention and loyalty. Our research shows that positive experiences lead to higher loyalty and engagement. For CX, this means customers are likely to demonstrate behaviors like increasing spend, recommending you, forgiving mistakes, and renewing contracts or subscriptions. For EX, engaged employees are less likely to look for a new job and are more willing to recommend the company to others, 3) XM also increases lifetime value. When an organization delivers consistently positive experiences, customers are more likely to purchase more and try new offerings, while employees are more likely to engage in productive behaviors like doing something unexpectedly good for the company, going above and beyond in their roles, or recommending an improvement to the company. 4) XM reduces cost-to-serve. Not only do happier customers and employees complain less, but XM will help uncover redundancies and inefficiencies so you can streamline, standardize, simplify, and even automate existing processes, ultimately saving both human and financial capital, 5) XM decreases cost-of-acquisition. Having a reputation for delivering excellent experiences makes it cheaper for you to reach new customers and employees by improving positive word-of-mouth and reducing the need for sales and marketing resources. While all five of these are potential benefits of an XM program, it’s important to keep in mind that this value will only be realized when an organization embraces the discipline of XM and uses insights to continuously listen and adapt to customer or employee needs in a way that helps move these five levers.
For more information, see:
– Report: Global Study: ROI of Customer Experience, 2023
– Blog: $3.7 Trillion of 2024 Global Sales are at Risk Due to Bad Customer Experiences
– Report: Global Study: What Happens After a Bad Experience, 2024
CX Metrics
1. Which metric(s) should we use for our CX program? (NPS, CSAT, etc.)
There is no silver bullet CX metric. Each one has its own positives and negatives, and they tend to all move in the same direction. For measuring overall relationship health, common metrics include Net Promoter Score (NPS), Overall Satisfaction (OSAT), Likelihood to Return, and Likelihood to Renew. To measure transactional experiences, organizations often use Customer Satisfaction (CSAT), Customer Effort Score (CES), or perceptions around things like agent friendliness, helpfulness, professionalism, or content effectiveness. When selecting a core CX metric to anchor your program on, select one that meets five key criteria: 1) It clearly relates to your business or brand strategy (like retention, renewals, market penetration, or cost reduction), 2) It reflects the key loyalty behaviors you want to manage and change, 3) It is easy for people across your organization to understand, 4) It is easy to take action on, and 5) It doesn’t create a negative or complicated experience for customers taking the survey. Given that most potential CX metrics meet these five criteria and tend to be highly correlated with one another, don’t spend too much time obsessing over choosing the “right” measurements; focus instead on using whichever metrics you do select to drive meaningful action across your organization.
For more information, see:
– Report: Five Steps for Building a Strong CX Metrics Program
– Launchpad: Building an XM Metrics Program
– Blog: The Two Ultimate Questions for XM Metrics
2. What role should competitive benchmarking play in our CX program?
Any data – including competitive benchmarking – that allows you to compare the experience you deliver to the experiences delivered by your competitors (both direct and indirect) is valuable information for shaping your CX strategy and assessing how you compare across industries. That said, benchmarks should only be one of many tools your CX program uses to evaluate the health of your experiences and relationships. To use benchmarks in a way that strengthens, rather than detracts from, your program, we recommend following these five tips: 1) Focus on driving improvements, not scorekeeping, 2) Obsess about delivering the right experiences to the right people, not simply comparing performance to current competitors, 3) Compare data within studies, not across studies with potentially different methodologies, 4) Set goals around your individual key driver metrics, not general benchmarked metrics, and 5) Rely more on internal insights, which inherently have more depth around critical organization-specific topics, not generic external data.
For more information, see:
– Blog: Five Recommendations for De-Emphasizing Benchmarking
– Blog: Existing Metrics Aren’t Enough for Customer-Centric Strategy
3. How should we set targets for our CX metrics?
No customer experience metric is inherently meaningful in and of itself. To make a metric actionable, you need to set clear and realistic targets around it based on how it relates to your business and financial objectives. Each target should be aligned with the level of effort and resources your organization is willing to invest in improving the score. Ground your goals in a baseline (we recommend at least three data points over three reporting periods), and set each target as a range rather than an absolute number. We also suggest using ROI modeling to identify the point of diminishing returns. In other words – at what point does the investment in improving the metric stop driving enough revenue or competitive advantage to justify the cost?
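For illustration, here is a minimal sketch of turning a three-period baseline into a target range. The readings, the planned improvement, and the use of the baseline's standard deviation as the width of the range are all illustrative assumptions.

```python
# Minimal sketch of baseline-grounded target setting. All numbers
# and the choice of stdev as the range width are hypothetical.
import statistics

baseline_readings = [42.0, 45.0, 44.0]        # core CX metric, last 3 periods
baseline = statistics.mean(baseline_readings)
spread = statistics.stdev(baseline_readings)  # period-to-period noise

improvement = 3.0                             # planned improvement in points
target_low = baseline + improvement - spread
target_high = baseline + improvement + spread
print(f"Baseline {baseline:.1f}; target range {target_low:.1f} to {target_high:.1f}")
```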
For more information, see:
– Report: Five Steps for Building a Strong CX Metrics Program
4. Should we use NPS as our global CX metric?
NPS is not a particularly good global metric as consumers in different regions answer the question very differently. For example, when Indian consumers like a given company, they give it an average NPS of 64. However, when Japanese consumers like a given company, they give it an average NPS of -44. One reason NPS differs so much is that there are cultural differences informing how consumers respond to the question. In some countries, making recommendations is a more common activity than it is in others. Additionally, consumers in different areas interpret what’s good or bad on the 11-point scale in different ways. Because of these variations, you can’t treat NPS identically across countries. If you do choose to use NPS as a global metric, you need to set region-specific goals based on the current competitive NPS in each marketplace and that location’s individual commitment to improvement.
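For reference, NPS is derived from the 11-point (0-10) likelihood-to-recommend scale mentioned above: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A minimal sketch with illustrative ratings:

```python
# Minimal sketch of the NPS formula: % promoters (9-10) minus
# % detractors (0-6) on the 0-10 scale. Ratings are illustrative.
def nps(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 4 promoters, 2 detractors of 8 -> 25.0
```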
For more information, see:
– Blog: Is NPS a Good Global CX Metric?
– Research: Calibrating NPS Across 18 Countries
– Data Snippet: Net Promoter® Score (NPS) Across Countries
– Video: What is Net Promoter Score (NPS)?
5. Should we use NPS to capture interaction feedback?
No. NPS is a measure for assessing people’s overarching attitudes or sentiments towards your organization, not a measure for understanding their perceptions of a single transaction or interaction. No matter how you configure the question, whether or not someone would recommend a company does not provide clear feedback about an individual interaction, like, say, whether a contact center call was easy or an online article was useful. Because this metric by its very nature reflects the sum of a person’s interactions with a company, it is best used as a relationship tracking mechanism – not a transactional-level metric.
For more information, see:
– Blog: Advice for Propelling Your Net Promoter Score Program
– Blog: CX Myth #4: Net Promoter Score Is the Best/Worst Metric
– Video: What is Net Promoter Score (NPS)?
– Launchpad: Building an XM Metrics Program
6. Should we tie compensation to CX metrics?
While compensating employees – usually in the form of a bonus – based on improvements to CX metrics can sometimes be an effective way to reward desired behaviors, a poorly designed compensation program will drive the wrong behaviors and negatively affect the experiences of both your employees and customers. If you do choose to compensate employees based on CX metrics, there are three principles to keep in mind: 1) Because CX is almost never the result of a single person, focus on team-level metrics and rewards that encourage employees to work together towards shared objectives, 2) To avoid turning CX metrics into a punishment for employees, design a program where there is little to no negative impact on compensation if the group doesn’t hit a goal, but there is a positive impact if they meet or exceed it, and 3) Don’t exclusively rely on compensation to inspire and recognize employees for doing the right things. Instead, tap into humans’ intrinsic motivators (meaning, control, progress, and competence) by rewarding and recognizing employees who demonstrate the positive actions you want others to adopt.
For more information, see:
– Report: Five Steps for Building a Strong CX Metrics Program
– Blog: CX Myth #6: Compensation Drives Good CX Behavior
– Blog: Stop Employees from Asking for Good Ratings
– Blog: How Do You Engage Employees? Adopt the Five I’s
7. How do we prevent employees from asking customers for good scores?
Employees ‘gaming the system’ by asking for good scores is a common problem. It happens when an organization puts more emphasis on achieving specific measurement outcomes than it does on achieving what should be the true goal of a CX program – consistently delivering experiences that meet the needs of its key audiences. When an organization overly focuses on a number (often by attaching strong incentives to individuals based on the data), it inadvertently builds a system that encourages counterproductive “gaming” behaviors, such as asking customers for good ratings. To discourage this type of behavior, we recommend not using compensation as a lever for driving employee behavior change. We also recommend implementing these five rules for all employees: 1) Don’t mention or refer to a score, 2) Don’t mention specific survey questions, 3) Don’t mention any consequences, 4) Don’t say or imply that you will see their responses, and 5) Don’t intimidate customers (or employees) in any way.
For more information, see:
– Blog: Stop Employees from Asking for Good Ratings
– Blog: 5 Rules to Stop Employees from Gaming Your Feedback System
EX Metrics
1. Which EX metric(s) should we anchor our EX program on (e.g., engagement, eNPS, etc.)?
Employee engagement is the most common and familiar metric used to anchor EX programs. However, while it is a well-researched and useful metric, engagement is not the right metric for every situation or survey. Organizations are now going beyond just engagement, introducing multiple outcome metrics into their suite of core EX KPIs. These standardized metrics often include intent to stay, well-being, inclusion, and overall employee experience. These are reliable metrics for organizations to anchor on as they are proven to represent unique elements of employee experience, to measure experiences throughout the employee lifecycle, and to track changes over time.
For more information, see:
– Blog: Five Areas for Modernizing Employee Experience Management . . . Right Now
– Research: 2024 Employee Experience Trends: Americas
2. Should you tie compensation to EX metrics?
Compensating managers and leaders for improvements in EX metrics can incentivize desired behaviors. However, use extreme caution if you go down this path as there are many examples of poorly executed bonus programs that end up rewarding the wrong behaviors and resulting in unintended negative consequences. Despite the many issues associated with doing it, there is often internal pressure to tie compensation to EX metrics. Here are five tips to help you navigate the risks: 1) Keep EX bonuses for executives only as managers cannot control many of the drivers of engagement, 2) To keep leaders focused on improving (not just reporting) scores, encourage them to ask the two ultimate questions for XM metrics – What have you learned? What improvements are you making? 3) To help managers and leaders take appropriate action, increase the frequency of employee listening to equip them with timely and relevant insights, 4) Reward managers for good EX behaviors that drive great outcomes, not just for improving the metrics, and 5) Celebrate great EX behaviors. Compensation is not the only way (nor the best way) to inspire and recognize employees for doing the right things.
For more information, see:
– Blog: Should You Tie Bonuses To Employee Experience Metrics?
Survey Methodology
1. Will the direction of response scales affect our survey results?
It can. Response scales can be oriented in several different ways. They can be either unipolar (answer choices all move in the same direction) or bipolar (the scale has a neutral midpoint with positive answer choices on one side and negative ones on the other). They can also be either positively oriented (statements in the affirmative – like those indicating agreement, satisfaction, positive emotions, or high likelihood – come first) or negatively oriented (statements of dissent or unfavorable feelings – such as disagreement, dissatisfaction, or unwillingness to do something – appear first). Furthermore, scales can be displayed horizontally or vertically. Horizontal scales tend to flow from negative responses on the left to positive responses on the right, while vertical scales tend to flow from positive on top to negative on the bottom. Which scale orientation you select will depend on factors such as the goal of your survey and the device respondents are using to complete the questionnaire. In terms of the impact of scale direction on survey responses, positively oriented scales tend to be easier for respondents to cognitively process, while negatively oriented scales may counteract potential response bias. Whichever orientation you choose, make sure it remains consistent across all questions within a survey and is expected and intuitive for the respondent.
For more information, see:
– Report: Best Practices For Designing Survey Questions
2. What is a “Likert scale?”
A Likert scale refers to a centrally weighted scale, meaning there are an equal number of choices on either side of a neutral midpoint. Likert scales are common and frequently expressed as a 5-point scale with “undecided” as the middle option. However, any odd-numbered scale – 7-point, 9-point, etc. – could technically fall into this category as well. Regardless of the total number of points, this type of scale assumes equidistance between each point, and because it is centrally weighted, it gives the respondent an “out” at the midpoint to indicate neutrality (e.g., “I am neither satisfied nor dissatisfied”).
For more information, see:
– Report: Best Practices For Designing Survey Questions
3. Should we use 4-point scales for survey questions?
Usually no, except in very specific instances. This type of scale – even-numbered with no neutral midpoint – is called a “forced-choice” scale because it forces the respondent to choose one end of the spectrum or the other (e.g. must agree or disagree). Because this type of scale requires respondents to make tough calls or tradeoffs, it can sometimes be helpful if you are looking to uncover motivations or sentiments. However, the lack of a neutral midpoint may confuse your respondents or require them to select an answer that does not accurately reflect their opinion. An odd number of response options, on the other hand, provides a balanced scale with a neutral option, allowing you to cover the entire range of potential answers. Regardless of how many scale points you use, the most important thing is to avoid switching scales within the same survey.
For more information, see:
– Report: Best Practices For Designing Survey Questions
4. Should we ask open-ended survey questions?
Open-ended responses can provide some helpful context to quantitative data. However, these types of questions tend to be time-consuming and burdensome for the respondent, so use them judiciously. When you do include open-ended questions, whenever possible either place them at the end of the survey or embed them as branched items based on respondent answers to quantitative questions. This approach gives respondents the opportunity to provide specific feedback about certain aspects of their experiences, which may be missed by asking a generic open-ended feedback request at the end of the survey. In general, open-ended comment questions should make up no more than 10% of the total number of questions in your survey. All of that said, there are certain cases where open-ended text questions can decrease survey duration and minimize fatigue, such as allowing respondents to manually enter their age or date of birth rather than forcing them to scroll through a drop-down list. In those cases, using open-ended questions will actually improve the survey experience.
For more information, see:
– Report: Best Practices For Designing Survey Questions
5. What is a “double-barreled” survey question?
Asking double-barreled, or even triple-barreled, questions is a common survey mistake. This type of question appears to ask about one concept but actually asks about two, often by connecting them with an “and.” For instance, asking customers, “How satisfied are you with the agent’s knowledge and friendliness?” or asking employees, “How connected do you feel to our company’s mission and culture?” Because these questions ask respondents to evaluate two separate concepts within a single answer, if respondents feel differently about each concept, it is impossible for them to accurately answer the question. It’s also impossible for you to know which of the concepts they are evaluating. Therefore, to ensure valid, actionable results, separate distinct concepts into different questions. So in the employee example above, instead of asking about mission and culture, ask employees, “How connected do you feel to our company’s mission?” Then, separately ask, “How connected do you feel to our company’s culture?”
For more information, see:
– Report: Best Practices For Designing Survey Questions
6. Should we use 7-point or 5-point scales for survey responses?
It depends. To produce high reliability and consistency in your data, you need to provide respondents with enough answer choices to distinguish between different opinions. However, you also don’t want to provide so many answer choices that respondents with the same opinion are likely to select different options. So while including more scale points allows for higher differentiation and validity, including fewer scale points produces more reliability and consistency. The art of selecting the right point scale comes down to optimizing for both qualities. According to survey methodology research, the optimal number of scale points ranges from 5-9. Within that range, the number of response options you offer will depend on whether differentiation or reliability is more important as well as which method you use to field the survey. For example, due to smaller screen sizes, we recommend using a 5-point scale for surveys delivered through cell phones or tablets to avoid horizontal scrolling.
For more information, see:
– Report: Best Practices For Designing Survey Questions
7. How much survey sample is enough?
Sample size refers to the number of people who participate in a survey, study, or experiment. In most cases, you are unlikely to be able to get answers from every single person in the group you need insights about, so instead, you take a random sample of individuals who represent that population as a whole. For example, in a large universe – say, 10,000,000 people – you would only need a random sample of 385 people to achieve a 95% confidence level with a margin of error of +/- 5%. You could sample fewer individuals if you were comfortable with a larger margin of error. Qualtrics offers a sample size calculator that lets you input your desired confidence level, population size, and margin of error, and then returns your ideal sample size. You must consider your sampling program carefully and ensure it is statistically sound. Many programs fail to gain traction internally if teams and individuals across the organization do not trust the sampling methodology used.
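The 385 figure above comes from standard sample size math. Here is a minimal sketch using Cochran's formula with a finite population correction, assuming maximum variance (p = 0.5):

```python
# Minimal sketch of the sample size math behind the "385" figure:
# n0 = z^2 * p * (1 - p) / e^2, then a finite population correction.
import math

def sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))     # finite correction

print(sample_size(10_000_000))  # -> 385 at 95% confidence, +/-5% margin
```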
8. What’s a good response rate for my CX survey?
Response rates are contextual, meaning there’s no universal, one-size-fits-all answer to the question of what makes a “good” rate for a CX survey. The factors that affect response rates for any particular project are numerous and unique to that program, making it difficult to compare across companies and industries. Rather than focusing on achieving an exact percentage, concentrate instead on using four levers to boost your project or program’s response rate into a healthy range. These four levers are Mode (how customers get the survey), Sample (who receives the survey), Design (what the survey looks like), and Action (how you respond to the survey).
For more information, see:
– Blog: Looking to Improve Your Response Rates? Use These Four Levers
9. Is it okay to use incentives to improve our survey response rates?
Yes – incentives can help boost your survey response rates. There are two types of incentives you can tap into. One is “extrinsic” incentives, which can come in the form of a gift card, a discount on products and services, a unique loyalty benefit, or an entry into a sweepstakes of some kind. The second is “intrinsic” incentives, which have no monetary value but can still entice people to complete a survey. For example, you could offer customers who complete a survey a sneak peek of key findings or invite them to participate in co-creation sessions, pilot programs, or other exclusive events.
10. How should we distribute our surveys?
There are a number of potential distribution methods you can use to field your surveys, including phone, IVR, email, SMS, QR codes, and receipts, as well as a number of digital listening posts, such as active intercepts, passive listening, and in-app notifications. Here are a few recommendations for distributing your surveys in a way that will help you achieve higher response rates: 1) Match your mode to your target audience, using the timing and channels that meet them where they are (e.g., send business customers an email at 7 am on Tuesdays, and use SMS or in-app questionnaires for younger customers), 2) Test different distribution methods using pilot programs and adopt the method that works most effectively, 3) Ask customers to identify their preferred communication method and distribute surveys to them using that method, and 4) Distribute transactional surveys as close to – or embedded within – the interaction you’re asking about as possible (e.g. ask customers to stay on the line and answer a few questions following a support call, or deploy an active intercept on an order confirmation page to request feedback about the online purchasing journey). In general, the main principle of survey distribution is that it should not be an “additional ask” of respondents’ time with no benefit to them.
For more information, see:
– Blog: Looking to Improve Your Response Rates? Use These Four Levers
– Blog: Tapping Into Five Types of Digital X-Data Collection Mechanisms
11. What is a good response rate for my EX survey?
A “good” response rate generally depends on a few factors, such as survey type, distribution method, frequency, historical participation rates, and organizational context. In general, an 80%+ response rate for an engagement survey that uses unique link distribution (i.e., one link per survey taker with targeted reminders) would be considered high. You should expect lower response rates from frontline staff who are not in front of a digital device for long periods of time, such as those in teaching, caregiving, retail, manufacturing, and transportation roles. In some contexts, exceeding 95% may be too high, as responding may have been mandatory or coerced. While smaller groups (e.g., 50 employees or fewer) often achieve 100% participation, if entire divisions or the organization as a whole have an extremely high response rate, it would be wise to dig deeper into how this occurred. An organizational culture of listening and responding to feedback plays a big role in encouraging participation, so if an organization does not have a strong track record of responding to employee feedback, a response rate between 65% and 75% might be more realistic. As organizations make an active effort to build employee trust by sharing results promptly after survey close and communicating the actions that will be taken, response rates should increase over time.
For lifecycle surveys, response rates can be as low as 30-40% depending on the type of survey. In general, Onboarding surveys typically have higher response rates than Exit or Candidate surveys. To maximize participation in these surveys, organizations should consider where each lifecycle survey fits into their preexisting workflow and trigger the survey immediately after the event or experience they’re measuring. Response rates are often higher when the request for feedback comes while the experience is still ‘fresh’ in employees’ minds and feels like part of a preexisting process rather than an afterthought.
12. What is a typical abandon rate (i.e., surveys that were started but not completed) for EX surveys?
Survey abandon rates typically fall within a range of 5% to 20%, depending on the overall response rate. The reasons for survey drop-off are usually benign, such as an employee being distracted or running out of time. However, look for patterns in where people drop off the survey – if there is consistency in where people stop completing the survey, there may be an issue with that section’s content, or difficulty completing it, that is worth investigating. According to research, drop-off rates increase after 5-7 minutes of taking a survey on a mobile device and around 7-9 minutes on a larger device like a computer.
It is important to note that drop-off rates are higher for open-text questions. These questions take more effort to answer, so people often skip them or plan to come back to them later with a considered response, which is why we recommend making open-text questions optional. To reduce abandonment rates, we recommend keeping surveys relatively short (30-45 items max) and using targeted reminders for participants who have started the survey and stopped midway.
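As a rough illustration of the pattern check described above, here is a minimal sketch that tallies where respondents stopped, assuming hypothetical per-respondent "last question answered" data:

```python
# Minimal sketch of a drop-off pattern check. The progress data is
# illustrative; 30 marks a fully completed survey.
from collections import Counter

last_answered = [30, 30, 12, 30, 12, 30, 11, 12, 30, 13]  # 30 = completed
drop_points = Counter(q for q in last_answered if q < 30)

# A spike at one question suggests a content or usability problem there.
for question, count in drop_points.most_common():
    print(f"Question {question}: {count} respondents stopped here")
```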
For more information, see:
– Report: Best Practices For Designing Survey Questions
13. What is the difference between anonymous and confidential employee surveys?
Confidential surveys are linked to employee personally identifiable information (PII), but this information is restricted from managers and business leaders, who cannot identify individual responses. This allows the organization to tie survey results to employee demographic information (e.g., team, function, tenure, gender, etc.). Confidentiality and employee privacy are enforced at the reporting layer, for example through dashboards that aggregate the data and set minimum thresholds for viewing insights. In these projects, identifying information like name, employee ID, or email cannot be accessed in reports or dashboards, but individual data can be viewed by a select group of HR professionals or project administrators who have privacy and security approval.
As the name implies, anonymous surveys are completely anonymous, meaning that no PII is linked to the survey response. No one, not even HR or the survey administrator, can link responses to an individual unless the subject discloses PII in their survey response, as these survey responses are not linked to individual employee data files. This approach can be useful in cases where not all survey participants can be known in advance or where certain regulations or circumstances dictate complete anonymity.
14. What is a good confidentiality threshold for my EX survey?
Industry best practices indicate that the lowest threshold for confidentiality is 5 respondents. Thresholds lower than 5 are likely to compromise the confidentiality of survey responses. However, depending on an organization’s structure and historical context with employee surveys, minimum response thresholds usually range between 5 and 20. While this rule applies to most types of employee surveys (e.g., engagement, pulse, lifecycle, etc.), multi-rater/360 surveys are an exception because of the small samples per response group (e.g., peer, direct report, manager). Although multi-rater/360 surveys are generally confidential (not anonymous), reports do not indicate who gave which response. As such, we recommend a reporting threshold of 3 evaluators per response group, with the exception of the subject’s manager.
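To illustrate, here is a minimal sketch of enforcing a minimum-respondent threshold at the reporting layer; the group names, scores, and the threshold of 5 are illustrative:

```python
# Minimal sketch of confidentiality-threshold suppression in reporting.
# Groups below the minimum n are hidden; data is hypothetical.
MIN_RESPONSES = 5

group_scores = {
    "Engineering": [4, 5, 3, 4, 5, 4, 3, 5],
    "Legal":       [2, 4, 3],                 # only 3 respondents
}

for group, scores in group_scores.items():
    if len(scores) < MIN_RESPONSES:
        print(f"{group}: suppressed (n < {MIN_RESPONSES})")
    else:
        print(f"{group}: mean = {sum(scores) / len(scores):.2f} (n = {len(scores)})")
```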
Voice of the Customer
1. What are alternatives (other than surveys) to listen for experience feedback?
Today a large proportion of the experience data (X-data) organizations collect is solicited through formal surveys. However, technologies that are capable of capturing unstructured and unsolicited information about people’s experiences – like speech analytics, text analytics, sentiment analysis, video analytics, website analytics, etc. – are improving at a rapid rate. For example, you could capture customer or employee sentiments by scraping comments from social media sites, third-party rating sites, or internal intranet systems and then using text analytics to understand how they are feeling about their experiences. You could apply speech analytics to contact center/help desk interactions to understand if someone had a good or bad experience based on their language, pitch, and volume. Technologies like these allow organizations to produce X-data that can be collected and analyzed at scale, much like operational data is today, and can therefore provide significant information about how people are thinking, feeling, and behaving without ever sending them a survey.
For more information, see:
– Launchpad: Driving Insights with X- and O-Data
– Blog: Take Employee Listening to the Next Level with Unstructured Listening
– Report: Deliver More Value with X- and O-Data: A CX Leader’s Guide to Integrating X- and O-Data (by Walker)
2. What is the best timeframe for closing the loop with customers?
Swiftly responding to customer feedback can put customers back onto their journey quickly and eliminate the need for them to recontact your organization for help. Historically the standard has been to take action within one week, but for modern CX programs, we recommend aiming to have your first contact with the customer within 24 hours – sooner if possible. To help you manage cases quickly, set up an automatic trigger to prioritize all active identified cases and assign them to an employee who is accountable and empowered to take action. Once customer contact has been established, assign a brisk timeline for successful case completion, setting more specific goals based on the journey type and customer segment.
For more information, see:
– Blog: Five Guiding Principles of Customer Recovery and Closing the Loop
3. How often should we ask customers for interaction feedback?
You want to instrument your listening program so that you’re collecting interaction feedback from the right customers (i.e. the ones who are most important from a business and strategy perspective) at the right time (i.e. after the moments that have the most significant impact on their loyalty – as identified through journey maps and/or analytics on X- and O-data). For a CX effort in the early stages of maturity, we recommend limiting the number of touchpoint surveys you send to only those few interactions that have the most disproportionate impact on customers’ key loyalty behaviors (see our data on this across 22 industries). Heavily invest in getting responses to those surveys, taking action, and tracking results. Then, as your CX program resources increase, plan to attach more touchpoint surveys to key moments along key customer journeys. However, before adding more surveys, make sure you have the capacity to close the loop with all the customers who provide feedback (even if it’s just a “thank you” note) and plan to pull back if you see response rates dropping because customers feel over-surveyed.
For more information, see:
– Report: The Customer Journeys that Matter Most
– Blog: Looking to Improve Your Response Rates? Use These Four Levers
4. How often should we ask customers for relationship feedback?
CX programs have traditionally sent every customer a survey asking for relationship feedback on an arbitrary date once or twice a year. However, in our modern, always-on world, this periodic cadence leads to long periods with no incoming relationship health information and prevents organizations from identifying and fixing poor experiences in a timely manner, often leading to unhappy, lapsed, or lost customers. One core element of a modern relationship measurement program is the practice of soliciting relationship feedback on an ongoing basis and during the moments that are relevant to an individual customer (e.g. the anniversary of their first purchase or right before their subscription or contract renewal date). To trigger these relationship assessments automatically at the right time, use O-data like customer tenure, interaction history, and customer type. Deploying relationship surveys in this way will not only generate higher-quality insights as you’ll be engaging with customers at a time that is relevant to them, but it will also enable you to respond to relationship health issues in a more timely fashion and foster an organizational culture of ongoing listening.
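As one illustration of O-data-triggered relationship surveys, here is a minimal sketch that flags customers whose purchase anniversary or pre-renewal window is near. The record fields, dates, and the 30-day/3-day windows are all assumptions:

```python
# Minimal sketch of an O-data trigger for always-on relationship surveys.
# Customer records and trigger windows are hypothetical.
from datetime import date, timedelta

customers = [
    {"id": "C-001", "first_purchase": date(2023, 9, 1),  "renewal": date(2025, 7, 1)},
    {"id": "C-002", "first_purchase": date(2024, 5, 28), "renewal": date(2026, 1, 15)},
    {"id": "C-003", "first_purchase": date(2024, 2, 1),  "renewal": date(2026, 3, 1)},
]

def should_survey(c, today=date(2025, 5, 30)):
    anniversary = c["first_purchase"].replace(year=today.year)
    pre_renewal = c["renewal"] - timedelta(days=30)
    # Trigger near the purchase anniversary or 30 days before renewal.
    return abs((anniversary - today).days) <= 3 or abs((pre_renewal - today).days) <= 3

due = [c["id"] for c in customers if should_survey(c)]
print(due)  # ['C-001', 'C-002'] -> renewal window and anniversary window
```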
For more information, see:
– Blog: Five Shifts for Building a Dynamic, Always-On Relationship Program
– Blog: It’s Time to Update Your Relationship Measurement Program
5. How can we avoid being seen as “the bearer of bad news” every time we share poor CX survey results with other groups?
One of the CX mistakes we often caution organizations against is forgetting to celebrate success. One core function of any CX program is, of course, to measure and improve customers’ experiences, which inevitably means identifying pain points or breakdowns. However, that should not come at the expense of another core CX function: customer-centric culture change. Don’t spend so much energy focusing on what needs to be improved that you forget to appreciate and highlight the progress that has already been made. Here are five tips for celebrating CX successes and avoiding organizational burnout and resentment: 1) Institutionalize success-seeking by regularly including “signs of success” on your meeting agendas so people get in the habit of thinking about the progress they’ve already made, 2) Acknowledge great CX work by teams and individuals – and not just “big bang” successes, but the successes generated by consistently doing the little things right, 3) Regularly communicate successes – in the form of stories, survey verbatims, testimonials, etc. – in presentations to senior leaders, internal communications, and in your own visits with other teams, 4) Explicitly and publicly thank employees for their work to motivate others to join the cause, and 5) Create customer experience awards to recognize the teams and individuals who are making a difference. In addition to celebrating successes, we also recommend that you explicitly communicate that the CX insights you’re sharing are not meant to make anyone “look bad” but are instead aimed at helping people achieve the goals and measurements they are accountable for. The more you can demonstrate how the insights you are sharing influence the things individuals and teams care about, the more open they will be to hearing what you have to say…whether the news is good or bad.
For more information, see:
– Blog: CX Mistake #8: Forgetting to Celebrate Success
– eBook: The Six Laws of Customer Experience
– Research: Creating and Sustaining a Customer-Centric Culture
6. How can we promote the idea of a centralized survey program to other departments?
While a centralized survey program offers some obvious advantages, individual departments can see this consolidation as a threat to their autonomy and way of working. They will often push to define and procure their own system, arguing that ceding control to a centralized Voice of the Customer team will result in poorer data and a system that fails to meet their exacting needs. To overcome these arguments, we recommend a CX team articulate the advantages of having a centralized program for both the organization and for each individual department. Benefits to the organization often include reduced procurement costs as well as a more holistic view of customer journeys – which is critical for understanding journeys that span multiple departments. This results in a better understanding of cross-functional drivers of satisfaction, increased collaboration across teams and departments, and improved ability to fix underlying, systemic issues. Benefits to the individual departments, gained as they move from insights “owners” to insights “customers,” include reduced labor and maintenance costs, richer insights that incorporate data from other departments, and improved coordination with groups owning upstream and downstream experiences. To build a convincing case for centralization, we suggest that you: 1) Spend time developing a detailed understanding of each department’s requirements for a survey system, 2) Include each department in the procurement and set-up activities to ensure their buy-in, 3) Work to ensure that the new system produces meaningful benefits that help them realize their broader departmental goals, and 4) Visibly make departmental goals the goals of the central survey system.
7. What is non-survey listening?
Non-survey listening uses proactive systems – such as text/voice analytics, social monitoring, and sentiment analysis – to harvest unstructured data from a variety of sources, such as social media, contact center conversations, chat sessions, or third-party review sites. This decreases the reliance on traditional surveys, which – because they rely on customers noticing or actively seeking out the survey initiation method – are completed by only a fraction of the organization’s total customer base and often skew negative. As the volume of this unstructured data explodes, non-survey listening is becoming increasingly central to organizations’ customer listening efforts as it is capable of surfacing unstructured feedback that lives organically across multiple channels and platforms. To harvest actionable insights from this information, organizations need to build four capabilities: 1) The ability to recognize keywords within content, 2) The ability to understand the context within which those keywords are written or spoken and translate them into concise topics, 3) The ability to apply a sentiment rating to the content to understand a customer’s emotions, and 4) The ability to define the intent of a customer (e.g., likelihood of switching to a competitor or accepting an upsell offer). While non-survey listening cannot answer direct questions your organization may have or generate traditional, pre-determined metrics like CSAT or NPS, it provides CX teams with a number of advantages, such as the ability to listen across many different channels (even ones your organization doesn’t own, like social media or rating sites), close the loop with customers more quickly and concisely, better identify macro trends, and capture insights without the need of survey construction or maintenance.
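To make the four capabilities concrete, here is a deliberately simplified sketch using hand-built keyword lists; production systems rely on trained NLP models rather than dictionaries, and all review text and word lists here are illustrative:

```python
# Minimal sketch of the four capabilities: 1) keywords, 2) topics,
# 3) sentiment, 4) intent. Keyword lists and reviews are hypothetical.
reviews = [
    "The checkout page kept crashing, I'm switching to a competitor.",
    "Support chat was fast and friendly, great experience!",
]

TOPICS = {"checkout": "Digital checkout", "support": "Customer support", "chat": "Customer support"}
NEGATIVE = {"crashing", "switching", "slow"}
POSITIVE = {"fast", "friendly", "great"}
CHURN_SIGNALS = {"switching", "cancel", "competitor"}

for text in reviews:
    words = set(text.lower().replace(",", "").replace(".", "").replace("!", "").split())
    topics = {TOPICS[w] for w in words if w in TOPICS}          # 1) keywords -> 2) topics
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)   # 3) sentiment score
    churn_risk = bool(words & CHURN_SIGNALS)                    # 4) intent
    print(topics, sentiment, "churn risk" if churn_risk else "no churn signal")
```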
For more information, see:
– Blog: Conversational Analytics Are Transforming Contact Centers
– Blog: 4 Ways to Modernize Your X-Data Systems
– Launchpad: Driving Insights with X- and O-Data
– Launchpad: The Fundamentals of Digital Experience Management
Employee Listening
1. Which questions should we include in an employee engagement survey?
While there are standardized models and guidelines for a modern employee engagement survey, such as Qualtrics’ EX25, which exact KPIs you include in your engagement survey should depend on your organization’s specific strategic vision for employee experience. Here are the general categories of questions you should plan on including: 1) KPIs or outcome metrics, which should be selected based on your particular strategic EX vision, 2) Drivers of EX outcomes, which measure the critical experiences that influence the KPIs and incorporate the aspects of work that are important to employees today. Understanding each driver provides critical signals for how to prioritize EX improvements, 3) Any important company or business unit-specific items, and 4) Open-ended questions, which are critical for providing color and depth to the quantitative insights. These open-ended questions are a great catch-all for employees to share their feedback, and we recommend using text analytics technology to uncover sentiment and themes. Although the exact questions you choose to include will depend on your organization’s EX vision, a modern employee engagement survey typically incorporates EX metrics like inclusion, wellbeing, intent to stay, and overall experience satisfaction – all of which are important and unique indicators of employee experience.
For more information, see:
– Blog: Five Areas for Modernizing Employee Experience Management… Right Now
2. What are the key elements of an employee pulse survey?
EX pulses typically include a combination of four elements: 1) Items that calculate EX outcome metrics like engagement, inclusion, intent to stay, and wellbeing (should comprise about 15% of the study), 2) Top signals or driver items that impact outcome metrics, which could include things like collaboration, communication, or innovation (should comprise about 70% of the study), 3) Company or business unit specific items, such as living the values or organizational change (should comprise about 10% of the study), and 4) Open-text questions that provide text response options, like prompting employees to share suggestions for improving organizational experience (should comprise about 5% of the study).
3. How frequently should we do an employee pulse survey?
As a general rule, the more often you pulse, the shorter the survey should be. Annual engagement census surveys tend to be the longest EX surveys. At 50-80 items, you can use these to set a baseline for all outcomes and drivers. Then, in shorter, more frequent pulses, EX KPI questions can be repeated and driver items rotated through the different surveys fielded throughout the year. As a general rule, twice-yearly pulses should be about 40 items and quarterly pulses about 25 items. We don’t recommend fielding pulse engagement surveys more frequently than quarterly as you need to allow time in between pulses to take action on the feedback you’ve collected.
4. How long should our employee engagement surveys be?
This partly depends on the frequency of your employee pulse surveys. Generally speaking, twice-yearly pulses should be about 40 items and quarterly pulses about 25 items. Here are three tips for getting the most out of your engagement survey real estate: 1) Use a standardized model. This will help ensure the items are selected based on statistical modeling, so there is reliability and validity in what you are measuring, 2) Define the strategic purpose of the survey. Most often, the audience for engagement survey insights is the frontline managers and senior leaders who take action to improve employee experiences. Be clear about the purpose of the survey and only include items that align with that objective. An engagement survey is not the only opportunity to collect employee insights, so be sure you’re using the right listening post at the right time. 3) Keep open-text items to no more than 5% of the study. Plan to ask broad questions (e.g., prompting employees to share suggestions for improving organizational experience), and use text analysis technology to uncover key themes and sentiments.
5. How should we balance concerns about survey fatigue with the need to listen to employees more frequently?
There’s a common misconception that survey fatigue is only caused by the amount of time an employee spends completing a survey. While this may sometimes be the case, there are often other factors producing this sense of “survey fatigue.” First is employees’ perceptions that taking the survey is not a good use of their time if the organization will ultimately just ignore their feedback. Research shows that when employees expect their feedback to be disregarded, it negatively impacts their experience and they will be less inclined to participate in surveys. To combat this perception, clearly communicate the value of employee insights and provide regular examples of how the organization listens to and acts on employee perspectives. Another common source of concern about survey fatigue comes from managers, leaders, and HR professionals who feel overwhelmed at the idea of having more data to analyze and greater expectations for action. One way to avoid this worry is to shift from a traditional EX program – with repetitive reporting and only periodic improvements – to a more modern approach to EX management, which distributes highly tailored insights to the right people at the right time and provides an easy transition towards action.
For more information, see:
– Blog: Five Areas for Modernizing Employee Experience Management… Right Now
6. Who is responsible for taking action on employee feedback?
Responding to and acting on employee feedback (or “closing the loop”) is not the responsibility of a single individual or team, but rather involves people and groups from across the organization. Closing the loop on employee feedback typically happens at three levels: 1) Outer Loop Process Changes. These are the systemic changes to processes, systems, and company culture that are driven by employee feedback. Because these types of changes require strategic design to improve experiences across the organization (e.g., company-wide changes to hiring, promotion, or performance processes), they tend to be led by senior leaders, operations teams, and HR teams, 2) Inner Loop Feedback Responses. Acting on employee feedback also requires tactical changes that may be implemented within teams or between a manager and a direct report (e.g., team-based debrief of the EX insights or collaborative action planning). Due to the nature of these actions, they tend to be led by managers, teams, or individuals, 3) Closed-loop Feedback Practices. After an employee provides feedback on their experience with an internal service provider, such as the IT helpdesk or facilities, an alert is automatically triggered and someone from that group follows up directly with that employee to address their feedback and fix the problem. This practice – which was adopted from CX management – has become increasingly common in EX.
7. What is the best practice for including/excluding partial responses in reporting?
There is not a one-size-fits-all answer to this question, but here are three common approaches: 1) Include any partially completed survey as long as there is at least one response. This approach captures the largest number of responses. However, it can lead to confusion during reporting due to confidentiality thresholds. For example, a dashboard shows 10 respondents, but no individual item has more than 8 responses due to skipped questions or partially completed surveys. 2) Include partially completed surveys that meet a certain threshold. You may decide that there is a point in the survey that is adequate for a response, such as getting past the first section or responding to a certain proportion of the items. This approach is most commonly adopted, as it is the middle ground: it includes a high number of responses while reducing the confusion created by option one above. 3) Only include surveys where the participant has actively clicked “submit” at the end of the survey. This is the most stringent approach. It will reflect the lowest number of total responses, as partially completed surveys will not be included in the final count.
While each of the above approaches has pros and cons, legal and/or compliance considerations should also be taken into account when determining the approach that best suits your organization’s needs.
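Here is a minimal sketch of how the three inclusion rules might be applied to response records; the record fields and the 50% threshold are illustrative assumptions:

```python
# Minimal sketch of the three partial-response inclusion rules.
# Response records and the 50% threshold are hypothetical.
responses = [
    {"id": 1, "answered": 40, "total": 40, "submitted": True},
    {"id": 2, "answered": 25, "total": 40, "submitted": False},
    {"id": 3, "answered": 2,  "total": 40, "submitted": False},
]

any_response = [r for r in responses if r["answered"] >= 1]                  # approach 1
threshold    = [r for r in responses if r["answered"] / r["total"] >= 0.5]   # approach 2
submit_only  = [r for r in responses if r["submitted"]]                      # approach 3

print(len(any_response), len(threshold), len(submit_only))  # 3 2 1
```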
8. What are the best open-ended questions to ask in an employee survey?
Open-ended questions can elicit comments that are a rich source of information, adding color and context to quantitative responses. To get the most value from open-ended questions, one best practice is to embed them as follow-ups to quantitative items. This provides respondents with the opportunity to give specific feedback about certain aspects of employee experiences that may be missed by a general question (e.g., “What do you like most about working at this company?”) at the end of the survey. Another best practice is to avoid formal or unnatural language by using a conversational tone. Instead of, “Tell us why you feel this way,” try, “We are really sorry to hear this. We want to get better! Can you give us more detail about this experience?”
When prompting respondents for open-ended comments, it is important to include a privacy statement in the survey that clearly explains how verbatim responses will be used. We recommend including language that reminds participants to not include their names or any specific information that may identify them if their responses will be shared directly with their managers.
For more information, see:
– Report: Best Practices For Designing Survey Questions
9. What is the best way to share open-ended employee comments with leaders?
While verbatim responses are great at providing context to quantitative results, they can sometimes overpower other messages when not properly prepared. Open-ended questions can be deployed in a survey a number of different ways and for different purposes, so a standard rule on how to share them is… it depends.
In the survey design process, be clear on how the open-ended comments will be used and distributed, ensuring that respondents are fully informed before they write their answers. Before releasing employee verbatim results, check for privacy issues (appropriate use of the text box, no identifiable information shared) and prepare end users with guidance so they use the comments appropriately and gain value from them. To analyze open-ended comments, we recommend topic and sentiment analysis. Results from these analyses will indicate where and how it would be appropriate to dive deeper into the written comments. Word clouds can also be a simple way to draw a leader’s attention to high-frequency words and themes. Looking at topics and frequencies first, before reading the detailed comments, reduces the likelihood of users getting stuck on a small set of extreme comments.
10. Should we take action on survey themes/categories or specific survey items?
Due to the breadth and variance of items that make up survey themes or categories (e.g., Communication, Collaboration, Growth and Development), we do not recommend action planning at this level. Instead, use individual survey items as the source of actionable insight. Employee surveys are usually made up of EX outcome items (such as engagement, wellbeing, or intent to stay) and driver items under each category or theme. Use item-level driver analysis to prioritize the actions that will have the strongest impact on the EX outcomes. For more information on driver items and action planning, take a look at the Qualtrics EX25 solution.
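As a simplified illustration of item-level driver analysis, here is a minimal sketch that ranks driver items by the strength of their correlation with an outcome metric. Real programs typically use regression on much larger samples; all item names and scores here are made up.

```python
# Minimal sketch of item-level driver analysis: rank driver items by
# correlation with the outcome metric. Requires Python 3.10+ for
# statistics.correlation. All data is hypothetical.
import statistics

engagement = [4, 5, 2, 3, 5, 1, 4, 2]  # outcome metric per respondent
drivers = {
    "My manager communicates clearly": [4, 5, 3, 3, 5, 2, 4, 2],
    "I have growth opportunities":     [3, 5, 1, 2, 4, 1, 4, 1],
    "The cafeteria food is good":      [5, 1, 4, 2, 3, 5, 1, 4],
}

ranked = sorted(
    ((item, statistics.correlation(scores, engagement)) for item, scores in drivers.items()),
    key=lambda pair: abs(pair[1]), reverse=True,
)
for item, r in ranked:
    print(f"{r:+.2f}  {item}")
```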
For more information, see:
– Blog: The ABCs of Employee Experience Action Planning and Six Roadblocks to Avoid
11. During a reorganization, should we maintain our action plans, passing them from the old manager to the new manager?
Action plans are somewhat akin to diet and exercise goals: if you’re not the one making the commitment for yourself, it’s highly unlikely that you’ll make a behavioral change. Similarly, behaviors are more likely to change when the person or team taking action is the same one that developed the plan. If the previous manager did not complete the action plans, there is a chance they are no longer relevant after the reorganization. If the action plans were not completed but are still relevant, it is better practice for the team to huddle and decide how to ‘stop, start, and continue’ their actions in the new landscape with their new manager.
For more information, see:
– Blog: The ABCs of Employee Experience Action Planning and Six Roadblocks to Avoid
12. How do we determine the “moments that matter” when developing a lifecycle program?
The best way to determine moments that matter when developing a lifecycle program is by conducting a journey mapping exercise. The specific goal of journey mapping is to help organizations identify ‘moments that matter’ based on the types of experiences their employees go through and the ideal experiences they’re looking to create. One way to conceptualize this is to organize experiences into three categories: 1) Universal experiences that everyone in the organization goes through, such as Candidate, Onboarding, Exit, etc., 2) Highly personal experiences, such as returning from parental leave, role changes, etc., and 3) Micro/digital experiences, such as contacting IT, ease of navigating internal job sites, etc.
Consider the impact of each of these moments across different employee segments, ensuring you apply a lens of diversity, equity and inclusion when determining which moments matter the most.
For more information, see:
– Blog: Using Journey Maps to Define Listening Posts
13. Who should have access to lifecycle dashboards?
Lifecycle dashboards are mostly relevant for the teams that own the processes or are responsible for improving that employee experience. As a general rule, those who are responsible for taking action on the results of the survey should have access to the results. For example, Onboarding experience dashboards are accessed by enablement teams, Candidate experience dashboards by talent acquisition teams, and so on. Often IT, HR, People Analytics, or brand administrators will also have access to the dashboard for the purpose of maintenance and implementation. In some cases, people managers and/or senior leaders may have access to dashboards, but it is most common for the functions responsible for that process to share relevant insights with these stakeholders as needed.
Experience Design
1. What is a “persona”?
A persona refers to a vivid description of a prototypical person within a specific segment. It is usually captured in a one- or two-page document that showcases the relevant characteristics of a typical person within that persona segment, such as demographic information, goals, character traits, attitudes, behavioral patterns, etc. Personas are a powerful XM tool as they create a common, unified understanding of customers or employees within a specific audience, which in turn helps organizations build internal empathy and alignment, design experiences for specific groups, capture and communicate their research findings, and bring their traditional segmentation models to life.
For more information, see:
– Report: Five Phases for Creating a Powerful Persona
– Template: XM Persona Documentation
2. What is the best way to develop personas?
The exact path you take to build a persona will depend on a number of different factors, including the needs and goals of your project, how widely (if at all) the personas will be shared, and whether personas already exist in some form across the business. However, we have identified five phases that teams generally flow through as they develop personas: 1) Preparation, where you define an overarching vision and strategy for the effort, articulating why it’s a worthwhile endeavor and how you expect to achieve your objectives, 2) Research, where you tap into a wide variety of different data types – most importantly, in-depth qualitative research – to develop a deep understanding of the behaviors and needs of the people within your target groups, 3) Analysis, where you translate the raw data you’ve collected into three-dimensional archetypes by organizing and analyzing your findings to uncover underlying patterns and relationships, 4) Creation, where you use those patterns you surfaced to generate robust persona documents that effectively communicate the significant attributes of your target audience in a way that’s easy for the people using these artifacts to internalize and apply, and 5) Deployment, where you put those personas into action, using them to inform the design and development of products, services, and experiences across the organization.
For more information, see:
– Report: Five Phases for Creating a Powerful Persona
– Template: XM Persona Documentation
3. What is the relationship between personas and traditional marketing segments?
Most organizations have traditional customer or employee segmentations that are built around demographic information – such as age, income, and gender – and are often designed for marketing or sales purposes. However, because such attributes are not usually the defining characteristics that shape people’s perceptions of their experiences, these traditional, demographically based segmentations and models are not the most effective tool for designing and improving experiences. Personas, on the other hand, are purpose-built for creating tailored, stand-out experiences. Because they are based on behavioral segmentation and include information like goals, attitudes, skills, motivations, values, habits, fears, and personality, personas are an excellent tool for understanding how people are likely to interpret and respond to experiences. Organizations can (and should!) use their traditional segmentation models to target the right groups of customers or employees by conducting ethnographic interviews with people from each segment and then developing personas based on common behavioral patterns. While these more traditional segments can help validate and enrich the personas, there is not a one-to-one correlation between the two.
For more information, see:
– Report: Five Phases for Creating a Powerful Persona
4. What is a journey map?
A journey map refers to a visual representation of the steps and emotional states that a person goes through over a period of time to accomplish a specific goal, which may include some interactions with your organization. These popular tools help organizations identify how an individual (e.g. customer, employee, partner) views their experiences with the company by putting their interactions within the context of that individual’s broader goals, objectives, and activities. Here are five steps you could follow to build a journey map: 1) Define a persona or target user, 2) Select a specific journey for that target user. This can be anything from an event that occurs over an extended period of time – like buying a car or onboarding for a new job – to a contained action – like making a car loan payment or setting up a new laptop, 3) Develop a draft journey map based on research. Typically, cross-functional teams will collaborate to produce a high-level outline of the key stages and interactions in the target user’s journey, including specific details, the user’s expectations and emotional state, and any issues or obstacles within each stage, 4) Highlight and prioritize moments of truth to determine which areas along the journey you should disproportionately invest in improving, and 5) Validate the draft journey map with real users and stakeholders. Don’t forget that the map is just a tool, not an end in and of itself. Once the map has been finalized, you need to use it to drive meaningful experience improvements.
For more information, see:
– Video: What is Journey Mapping?
5. How should we use our journey maps to inform listening post design?
Because journey maps outline the steps people follow to accomplish a specific goal, they are a valuable resource for a variety of key XM activities, including pinpointing where the organization should set up listening posts for maximum impact. To use journey maps to instrument your listening program, we recommend considering the four W’s: Why, When, Who, and What. 1) Why should we listen? Bring operational data (O-Data) – like tenure, purchase history, or demographic information – together with experience data (X-Data) – like effort, emotion, or satisfaction – in your journey map. Looking at X- and O-data alongside one another will help you assess the business value of improving an experience and prioritize which actions to take once your listening posts are in place, 2) When do the most important interactions happen? Because journey maps show the steps people take on the path to their goal, they are useful for identifying the key interaction points (moments of truth). You want to establish listening posts at these key interaction points in order to capture insights at the most emotionally intense and consequential moments of an experience. 3) Who are the most important users? Use journey maps to help ensure you are collecting feedback from the right mix of people – the ones who represent the most important groups having experiences associated with that journey, 4) What improvements should we prioritize? Look at how listening posts link to key business objectives and what you need to know about people’s perceptions at different steps in the journey. This information should help you decide which questions to ask at each listening post.
For more information, see:
– Blog: Using Journey Maps to Define Listening Posts
– Report: Maximizing Value from Customer Journey Mapping
6. How should we use our journey maps to change our company culture?
Despite the – often significant – amount of time and resources organizations invest in developing journey maps, these efforts often fall short of their full potential because organizations view the journey maps solely as a means of envisioning and measuring people’s experiences – not as a tool for driving action across the organization. When used correctly, journey maps can be a valuable tool for fostering a more experience-centric culture. To do this, we recommend leveraging them in five ways: 1) Find and fix problems by using journey maps to identify pain points and close experience gaps across the organization, 2) Design innovative experiences by uncovering and fulfilling unmet customer or employee needs, 3) Create strategic alignment by linking experience management efforts to desired business outcomes, 4) Refine your XM listening program by focusing insights and metrics on key moments of truth, 5) Instill XM-centric mindsets and behaviors in leaders and employees across your organization.
For more information, see:
– XM Deep Dive: Neighborhood Health Plan of Rhode Island Drives Culture Change with Journey Maps
– Blog: Driving Action From Journey Maps
Governance
1. What is the best way to set up internal CX program governance?
Customer experience transformation requires organizations to maintain a systematic focus on making changes over multiple years and across a number of different projects and teams. To coordinate all these various efforts, organizations need to establish governance structures that provide the appropriate decision-making, alignment, accountability, and conflict resolution. A well-executed governance model is often made up of five elements: 1) CX core team. The centralized team that sets the direction and sustains CX efforts across the organization, 2) Executive sponsor. The primary advisor and reporting executive who engages with peers at the executive level on behalf of the CX core team to build buy-in and support for the CX strategy, 3) Steering committee. The group of decision-makers from critical company functions who come together to shape and approve CX strategy, 4) Working group. The set of influential managers from across the company who come together to lend their expertise and effort to the CX core team in order to move CX initiatives forward, 5) CX Ambassadors. Employees from all levels of the organization who provide input and help move the CX strategy and action plan forward. When structured and leveraged effectively, a CX governance model with elements like these will help ignite the sustained momentum needed to overcome the inertia that can stall large-scale change efforts.
For more information, see:
– Blog: The Five Essential Elements of CX Program Governance
– Webinar: Mastering Governance – A Key XM Leadership Skill
– Tool: Responsibilities of a CX Core Team: Strengths and Gaps
– Blog: Five Elements of Successful XM Ambassador Programs
– Blog: The Three Core Functions of a CX Center of Excellence
2. How centralized should our XM program be?
It depends. As organizations mature and expand their XM efforts, their governance model changes as well, often evolving from a centralized structure to a federated one. Initially, XM programs (including CX, EX, BX, and PX) organize themselves by creating a centralized team dedicated to developing and implementing relevant best practices across the organization. However, while these centralized teams can be efficient at building internal capabilities and influencing change, their limited scope and reach often hamper efforts to embed ongoing, sustainable practices and behaviors across the organization. Consequently, as an XM program matures, we tend to see organizations shift to a federated XM model, which makes it easier to coordinate a distributed set of capabilities and tailor best practices to individual parts of the business. This evolution from a centralized to a federated approach is built upon three components: centers of excellence, enterprise coordination, and distributed skills and mindsets. These components become more developed and prominent as the program matures.
For more information, see:
– Blog: How XM Evolves to a Federated Governance Model
– Report: The Federated Customer Experience Model
– Tool: Responsibilities of a CX Core Team: Strengths and Gaps
– Tool: Responsibilities of an EX Core Team: Strengths and Gaps
3. What does an XM Center of Excellence look like?
Organizations must maintain a strong set of capabilities in certain key areas. Rather than having all of these capabilities reside solely in a centralized team, organizations can develop centers of excellence (COEs), which refer to units or teams of experts inside an organization with a common focus (e.g. XM, CX, EX) who work with leadership to set the strategic direction of a program, define a standardized set of methodologies, tools, and approaches, manage the portfolio of relevant activities, and share best practices across the organization. Employees who are a part of a COE maintain strong specialized knowledge and skills that they share across lines of business. For CX, common COEs include deep analytics, reporting and data visualization, organizational development, experience design, process improvement, and culture change management.
For more information, see:
– Report: The Federated Customer Experience Model
– Tool: Responsibilities of a CX Core Team: Strengths and Gaps
– Tool: Responsibilities of an EX Core Team: Strengths and Gaps
4. How many people do I need on my core XM team?
There is no “right” number of people you need to successfully derive value from your XM program. It depends on the size of your organization, the commitment of your executives, and where you are in your journey towards XM maturity. XM programs tend to be small early on, then grow as the organization’s commitment to XM grows, and then shrink again once XM capabilities become embedded across all areas of the business and the organization moves from a centrally driven model to a more federated one. For reference, when we asked 411 XM professionals how many full-time employees they currently have dedicated to XM, the largest share (46%) reported between 1 and 5 full-time employees, while 16% said they have more than 100. Everyone else’s XM team headcount fell somewhere in between.
For more information, see:
– Report: State of the XM Profession
– Report: The Federated Customer Experience Model
Culture Change
1. How can I convince my executives to be more involved in XM?
We recommend engaging executives in a dialogue about things they already care about and showing them how XM can be a valuable capability for achieving their goals. Aim for a discussion that flows through these stages: Setup, Inspire, Reinforce, Elaborate, Nurture. For Setup, focus on communicating to executives why they should care about XM – not because it’s inherently good, but because customer and employee expectations and demands are rising, which is making change an ongoing reality. Next, to Inspire them, explain how XM creates organizational agility, allowing companies to more quickly sense and respond to these changes across all areas of the business. To Reinforce this point, share success stories around what other organizations are doing and industry data on the ROI of XM. Once you’ve piqued their interest, Elaborate on the path to XM maturity, which is built on the three foundational elements of the XM Operating Framework: Technology, Competency, and Culture. And then finally Nurture their interest and understanding by sharing compelling, relevant XM content. Ultimately, the goal is to convince executives that XM will allow them to create the organizational agility that is essential for thriving in an unpredictable future.
For more information, see:
– Blog: How Do You Explain Experience Management to Senior Executives?
2. How do I conduct a maturity assessment?
Organizations don’t become customer- or employee-centric overnight. Conducting maturity assessments periodically will provide insights into the strengths and weaknesses of your XM programs (XM, CX, EX, and Digital CX), so you can track progress towards your XM goals and evolve your plans as needed over time. There is no single way to conduct a maturity assessment, but here are some recommended steps: 1) Rather than doing the assessment alone or just with your core XM team, invite multiple key stakeholders who have direct involvement in your XM program (e.g. relevant business leaders, users, and project team members) to participate, 2) Prior to taking the assessment, coach participants on XM basics so everyone comes into the assessment with an understanding of what XM is and how the results will be used, 3) Have each participant take the assessment themselves and then, as a group, discuss the results – identifying areas where you are currently strong, where there are capability gaps, and where there are differences in people’s assessments (see the sketch after this list), 4) Identify meaningful improvements you want to make to advance towards your program and business goals (our tool on prioritizing CX or EX improvements is particularly helpful in this step), 5) Translate those improvement areas into a program roadmap outlining which projects and initiatives you plan to focus on over the next 6, 12, and 24 months to achieve those goals, and 6) Repeat the assessment and update your roadmap on a defined cadence; we suggest every 12-18 months.
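As an illustration of step 3, here is a minimal Python sketch that aggregates participants’ ratings per competency to flag strengths, gaps, and areas of disagreement. The competency names, the 1-5 scale, and the flag thresholds are illustrative assumptions, not part of any official assessment.

```python
# Minimal sketch: summarize each participant's maturity ratings per
# competency to surface strengths, gaps, and disagreement worth discussing.
import statistics

ratings = {  # competency -> one rating (1-5) per participant; hypothetical data
    "Lead":     [4, 4, 3, 4],
    "Realize":  [2, 3, 2, 2],
    "Activate": [3, 1, 4, 2],  # wide spread: worth discussing as a group
}

for competency, scores in ratings.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    # High spread suggests participants see this area differently; a low
    # mean suggests a capability gap; otherwise treat it as a strength.
    flag = "discuss" if spread > 1.0 else ("gap" if mean < 2.5 else "strength")
    print(f"{competency}: mean={mean:.1f}, stdev={spread:.2f} -> {flag}")
```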
For more information, see:
– Blog: Five Tips for Using Our XM Maturity Assessments
– Launchpad: Maturing Your XM Program
– Tool: Customer Experience Maturity: Assessment
– Tool: Employee Experience Maturity: Assessment
– Tool: Digital CX Maturity: Assessment
– Tool: Experience Management Maturity: Assessment
– Tool: Assessment: XM Ambition
– Tool: Assessment: XM-Centric Culture
3. How should I approach change management?
Successful XM transformation requires a systematic focus on making changes over multiple years and across a number of different projects and teams. But XM change isn’t easy; it requires significant transformation across almost every aspect of a business’s operations, including people, processes, and technology. One approach to the change management required for XM transformation is “Employee-Engaging Transformation.” This approach focuses on aligning employee attitudes and behaviors with the organization’s desire for change. To succeed with this approach, organizations must incorporate five practices into their transformation efforts: 1) Connect employees with the vision by clearly defining and conveying, not just what the future state is, but why moving away from the current state is imperative for the organization, its employees, and its customers, 2) Ensure leaders recognize the role they play in the transformation and commit to attacking ongoing obstacles and working together until the organization has fully embedded the transformation into its systems and processes, 3) Enlist key influencers – especially middle managers – to evangelize and support the transformation with their reports, 4) Empower employees to change by first inviting them to help shape the transformation and then equipping them with the tools, training, and coaching they need to implement the necessary changes, and 5) Share impactful, informative messages through a variety of different channels and in a way that balances both practical and inspirational elements for each target audience.
For more information, see:
– Tool: Leading XM-Centric Culture Change: Strengths and Gaps
– Report: Introducing Employee-Engaging Transformation
– Tool: Effective XM Communication Plans: Strengths and Gaps
4. How should we combine our CX and EX efforts?
As the employee engagement virtuous cycle shows, there is an inextricable link between employee experience and customer experience; EX sustains great CX. To help organizations think through how to combine their EX and CX efforts, XM Institute has identified four ways to align these key XM components: 1) Visualize, which is about presenting senior leaders with actionable EX and CX insights alongside one another, inherently communicating that both are important to the business and should inform strategic decision-making, 2) Analyze, which is about investigating the statistical links – usually through longitudinal studies – to understand how EX metrics, like employee engagement, predict current or future CX metrics, like NPS or CSAT, 3) Involve, which is about introducing ways in which employees can more directly impact CX results, including soliciting CX-relevant feedback from employees as well as finding and spreading desired customer-centric behaviors, and 4) Converge, which is about aligning your EX and CX technologies and competencies. This may involve articulating a unified XM vision, establishing a shared Center of Excellence, and/or aligning your platforms and processes. Efforts within each of the four categories should lead organizations toward a better understanding of how and why employees drive CX and the changes they can make that will have the biggest impact on customer and business performance.
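As a simple illustration of the Analyze category, here is a minimal Python sketch that correlates a unit-level EX metric with a later CX metric. The data, the one-year lag, and the use of a single predictor are illustrative; real linkage studies are longitudinal and control for unit-level differences.

```python
# Minimal sketch of an EX-to-CX linkage check: does a unit's average
# engagement relate to its NPS a year later? Data is hypothetical.
from statistics import correlation  # requires Python 3.10+

# One value per business unit: average engagement this year, NPS next year.
engagement = [3.2, 3.8, 4.1, 2.9, 3.5, 4.4, 3.0, 3.9]
nps_next_year = [12, 25, 38, 5, 20, 41, 8, 30]

r = correlation(engagement, nps_next_year)
print(f"Engagement vs. next-year NPS: r = {r:.2f}")
```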
For more information, see:
– Blog: Four Categories of CX and EX Alignment
– Video: The Employee Engagement Virtuous Cycle
– Data Snippet: XM Leaders Enjoy Stronger Business Performance, 2022
– Research: The XM Diffusion Cycle