Sales Performance Management from an implementation perspective: what's important for Clients and how SPM systems stand up to their needs.
In this article, I would like to shed some light on how we understand SPM (Sales Performance Management) systems from our perspective as SPM integrators.
By doing so, I hope to add a practical view to the SPM theory.
I also hope to show how fascinating the world of sales compensation is. For if it's true that selling is the most important job on earth (the logic goes like this: everything that is exchanged must have been sold, and free exchange is the foundation of the civilized world), then compensating for sales must be a noble - and, in fact, pretty complex - cause. Worth having sophisticated IT systems behind it.
What is a Sales Performance Management (SPM) system? The answer might not be obvious even to seasoned Compensation and Benefits professionals. It is certainly not obvious to non-specialists who don't happen to work in HR and might know little about the fascinating world of sales compensation.
A quick Google search for the SPM phrase brings us to software producers' websites, where we can learn that SPM is:
a data-informed approach to plan, manage, and analyze sales performance at scale, drive revenue, and sustain an enterprise’s leadership position in its industry. And it’s becoming a necessary tool for organizations to achieve agile operations in today’s fast-paced markets.
While this is certainly true, from the practical business perspective nobody comes to us and says: "I need an SPM system so that I can design a Sales Plan". Clients rather come with PDFs ready and say: "Hey, these are our sales plans. Can you make sure that they calculate right?". My point is that what Customers need, first and foremost, are working commission calculations so that they can compensate their salesforce. Secondly, they need to be able to change the calculation rules themselves - with no IT involvement whatsoever. Everything else is secondary from that perspective. Important, but secondary. With no sound and flexible sales commission calculation engine, no SPM system can thrive.
Paraphrasing the famous real-estate phrase, I would argue that calculations are the core of SPM. In the course of our implementations, we have seen multiple times that customers keep the old calculation methodology alive, whatever that might be - an outdated Cobol-based system, an in-house add-on to SAP, or a sophisticated Excel spreadsheet that they have used for the last X years. Until the system we implement proves the same (or better) calculation outcomes, the two systems run together in parallel. And sometimes it does happen that the outcomes differ! Then it typically turns out that there was something wrong in the legacy system: either the formulas shifted a bit in the Excels, or there is a corner case that only that lady from the 10th floor knows about... Implementing the new system is therefore a chance to take a second look at the calculations and get them straight. It might even happen that the cost of investing in the new tool pays for itself simply by correcting formulas and no longer overcompensating (or undercompensating, as may well be the case).
Because the core of SPM - the calculations - is so important, the Sales Performance Management entity is sometimes conceptually divided into SPM and ICM. ICM - Incentive Compensation Management - is then seen as the core of SPM. It obviously contains the calculation rules. It should also have the flexibility to change them at will, with no IT or system integrator support. It should work on pre-defined, yet flexible, populations of payees - allowing Comp & Ben professionals to decide which plan goes where and whom it affects. It should also include Quota and Territory management, so that quotas can be set and crediting transactions is possible.
There are only two issues with the approach mentioned above:
1. Gartner defines ICM more narrowly, as one of three core capabilities rather than the whole core: "Gartner defines incentive compensation management (ICM), which includes standard reporting and analytics, as the principal of three core capabilities for SPM — which incorporates territory management and quota management. These core capabilities link to the following “near-core” capabilities: quota planning, territory planning, advanced analytics, gamification and objectives management." (go here for reference)
2. ICM can easily be confused with STI (Short Term Incentives) management - that is, with the management of popular bonuses, which concern not only commissions for sales (defined as compensation paid for attainment against quota in a given territory) but also achievement against KPIs.
This suggests that the SPM system should also incorporate bonus management - the ability to set, manage and calculate bonuses. I find this true from the implementation perspective as well, for it is not always the case that salespeople are compensated solely on the basis of the transactions they made in a given time period. There might be other qualitative and quantitative goals that Management may wish to incorporate into the SPM solution.
Let's take a look at the example below from the classic compensation book How To Design and Implement Plans That Work: The Complete Guide to Sales Force Incentive Compensation by A. Zoltners, P. Sinha and S. Lorimer.
In the example we see two plans. Plan G looks like something that we are used to having in SPM systems. Plan H however gravitates towards a bonus rather than a sales plan:
Plan H encourages salespeople to cultivate long-term relationships with their customers. A salesperson who is paid under this plan is more likely to ‘‘do the right thing’’ for a customer, even if it means sacrificing short-term sales. Plan G encourages a different relationship between salespeople and their customers. It encourages salespeople to ‘‘do what it takes to make a sale’’ and then move on. Often, companies with pay plans like Plan G have separate service organizations to meet the ongoing needs of customers. That way, the sales force can focus all of its energy on selling.
For me, Plan G is a Hunter's plan, while Plan H is a Farmer's.
If you are not familiar with the Hunter / Farmer terminology, I refer you to this excellent explanation by Tom Abbott below:
My point is that the SPM system should be able to cope with both types of plans. Otherwise it will not be complete.
If I were to bet on what's the second most important thing for the Customers (next to making the calculations right), it would be flexibility - the ability to change the calculation rules themselves.
Based on my experience, most SPM systems use some form of a graphical designer and/or pseudocode-writing capability (entered or dragged and dropped in the system's console) for constructing formulas that are then translated into SQL. The SQL is transferred to the backend and evaluated by the underlying database engine (yes - databases not only store data, they also calculate!). Finally, the database returns the calculated values, which are presented to the user. It is important how easily these rules are definable and what it takes to change them.
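To make this concrete, here is a minimal sketch of what such a translation might look like - a tiered commission rule expressed as a SQL CASE expression and evaluated by the database engine itself, not by application code. The table, names and rates are all hypothetical, and real SPM engines generate far more elaborate SQL:

```python
import sqlite3

# Hypothetical, simplified sketch of a rule an SPM designer might generate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("Alice", 80000), ("Bob", 120000)])

# Tiered rule: 2% on revenue up to 100k, 5% on the portion above 100k.
rule_sql = """
SELECT rep,
       CASE WHEN amount <= 100000 THEN amount * 0.02
            ELSE 100000 * 0.02 + (amount - 100000) * 0.05
       END AS commission
FROM sales
"""
# The database does the calculating; the application only presents results.
for rep, commission in conn.execute(rule_sql):
    print(rep, commission)
```

The point of pushing the rule into SQL is exactly the one made above: changing the rule means regenerating a query, not redeploying application code.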
But flexibility is not only about the ease of changing formulas or writing new calculation rules. It is also about ability to freely and easily define quotas, payout curves, comp ceilings and floors, hierarchies and who the plans affect (and who they should not affect).
In today's Software-as-a-Service world, nobody wants to rely on IT people to implement these types of changes. Clients don't want to call the integrator either. They want to be able to change much more than the basics by themselves. And we fully understand that. We are always there to help with bigger things (like implementing spiffs or discretionary adjustments), but in a well-executed SPM/ICM system, we should not be helping just because parameter X should now have the value of z, not y.
Having said that, let's look at other features that SPM (understood in a broader sense than just the calculations) should have.
According to Gartner, SPM is
a suite of operational and analytical functions that automate and unite back-office operational sales processes. SPM is implemented to improve operational efficiency and effectiveness. Capabilities include sales incentive compensation management, objectives management, quota management and planning, territory management and planning, advanced analytics and gamification.
As mentioned earlier, the calculations are the core of SPM, as this is the feature the Business expects foremost. However, many things have to happen to make the calculation part of the work possible.
Therefore I would define the heart of SPM (often referred to as ICM, as mentioned earlier) as a system of components, as shown in the diagram below.
Every SPM system utilizes these components in one way or another.
Sometimes this core is referred to as the five Cs - as shown in Diagram 1-2 below.
Diagram 1-2 5Cs of core SPM (ICM)
Calculations won't happen without these elements working together towards the goal of calculating figures and feeding them into the payroll outbound interface.
The other supporting elements of the SPM are:
I believe that the ecosystem of these elements constitutes the full Sales Performance Management system in Gartner's meaning of SPM. And this does not have to mean that everything is packed into one tool. It can just as well be a system of tools that enrich each other and act together in executing the expected actions.
Further into the implementation perspective
When I look further into the business perspective, I see that sales plans are often complicated. Projecting these plans onto an SPM system is certainly a challenge. It is not only about the system being functionally capable of modeling them. It is equally important that the implementation team knows what the plans mean business-wise and can translate them into a component setup and mathematical formulas that, when executed, behave exactly as described by the sales plan.
Complicated or not, the sales reps navigate the meanders of their plans well - they most likely keep their own calculations on the side, alerting the parties involved whenever anything is not right with their plan. And that's completely natural; it keeps the system accurate and balanced.
One thing is certain in that instance - the more complex the plans, the more likely a solid dispute management embedded in the system will be needed.
I think it looks something like this:
Diagram showing the relation between plan complexity and disputes raised - I suspect it is exponential.
This is not backed by any study; however, it seems logical. I would gladly hear from Comp & Ben specialists on that.
I hope what I wrote about calculations and SPM makes sense and that I managed to shed some light on the fascinating realm of Sales Performance Management calculations and its other vital elements. To summarize the key points that I made:
In the next steps, we will examine why organizations utilize the variable pay concept and therefore need SPM systems to help manage it.
So far, we have focused a lot on the core of the SPM system - that is, on calculations.
Now I would like to take a step back and consider why the calculations are needed at all. In other words, I would like to explore, from a business perspective, why there is a need to calculate anything in SPM in the first place.
Let's then explore what the drivers of that primary business need are and how SPM responds to them.
It seems a safe statement that in mature economies, more than half of companies utilize the variable pay concept. The idea seems to originate back in 1980 in the US, where the pay-for-performance concept was introduced by the American Compensation Association. This is at least how Lori Wisper from Willis Towers Watson sees the roots of the concept in this interesting article on purpose-driven pay for performance.
Nowadays, according to Salary.com:
77% of companies in the U.S. are using variable pay programs as part of their total rewards packages.
Europe and the rest of the world may be lagging behind; however, it seems clear that the pay-for-performance concept is well established. It seems logical that organizations are willing to pay for results. When a company is well aligned internally (that is, when the things people do help the company achieve its worthy goals), individual excellence develops, enriches everybody around, and adds to the company's global results. That seems like a good thing to compensate for.
The type of work people do has also changed. For many types of jobs, it is now possible to work from almost anywhere and for almost anyone globally. Supervision and micro-management are difficult and generally not welcome. People seem to value the flexibility and benefits of working independently. The traditional concept of a worker is diminishing, and ideas are emerging to treat people working together as more independent and to foster and value their entrepreneurial spirit.
This attitude is visible in many places, from Elon Musk's Tesla Handbook to the attitude and philosophy of Naval Ravikant - a person who contributed to growing Uber, Twitter, Opendoor, Stack Overflow, Clubhouse, Notion and many other successful products.
For this to happen, however, teammates and coworkers need to accept personal responsibility for their undertakings and results. And this is where variable pay comes into play.
When that responsibility is accepted, the difference between being paid well and not receiving much lies in the employees' own hands.
Variable pay comes in many shapes. It might be company-wide bonuses, team bonuses, individual bonuses, spot bonuses, profit sharing, and other short-term incentives, most likely based on objective KPIs, and weighted against the OTB (On Target Bonus). It might also come as Long Term Incentives (deferred bonuses) driven by regulations and deferral matrices.
However, the place where I think variable pay plays the biggest role is Sales. There it often comes in the form of a commission paid on transactions made, or in the form of bonuses for salespeople (usually "farmers") for certain behaviors or for reaching long-term goals.
Experts in the field suggest that there are at least five reasons for the importance of variable pay in Sales:
1. Salesforce causality.
In general, the more influence the sales force has on whether the customer makes the transaction, the greater the variable pay should be. There are industries or brands where making sales is relatively easy (e.g. FMCG) and industries (e.g. dedicated enterprise-scale software, manufacturing automation, robots) for which the sales cycle is long and the salesperson's understanding and involvement are crucial. The more difficult and less certain the sales are, the more they attract risk-takers who are willing to work for little or even nothing in anticipation of the big transaction they devote their time to. Therefore the prize must be generous to attract such talented risk-takers as, for example, Ian Koniak from Salesforce.
2. The output in Sales is usually measurable.
It is typically possible to measure sales in terms of the revenue or even the profit a transaction brings. Sales results are also comparable in monetary terms between salespeople and sales territories. This means the goals are objective and relatively clear, and using variable pay for sales seems natural and just.
3. Sales have a direct impact on company performance and its very existence.
With no sales, no company can exist for long. Sales directly impact company revenues and the company's very existence. Therefore there is a natural inclination to pay for sales.
4. Variable pay takes away micromanagement and tight supervision.
When companies pay for performance in sales, the achieved results - and therefore the abundance or lack of compensation - allow for the salespeople's self-control. They receive immediate and sometimes harsh feedback from the world about their professional actions.
5. Success Acknowledgment
People working in sales often suffer rejection and bad emotions. Variable pay helps to rebuild confidence, reassure a salesperson, and increase morale.
These ideas are described in more detail in the Complete Guide to Sales Force Incentive Compensation book on pages 11-12.
According to the Harvard Business Review, the ideal ratio adopted by the markets seems to be 60% fixed pay and 40% on-target variable pay. It varies, however, from industry to industry.
The more causality the sales force has in customer decision-making, the more variable pay there should be. Business literature also suggests that approach; see the article by Financial Express.
The no-cap rule might be also a good idea as it will remove the "max-out and quit" issue, allowing the best people to continue to be motivated, regardless of how well they perform.
Finding the right salary-incentive mix is a vast exercise on its own, in which the proper usage of company data, simulations and business intelligence reporting of an SPM system would help.
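As an elementary illustration of the pay-mix arithmetic (the 60/40 split mentioned above; the OTE figure is purely hypothetical):

```python
# Illustrative only: splitting on-target earnings (OTE) by a pay mix.
def split_ote(ote, fixed_share=0.60):
    """Return (base salary, on-target variable pay) for a given mix."""
    base = ote * fixed_share
    variable = ote - base
    return base, variable

# A hypothetical 100k OTE on a 60/40 mix.
base, variable = split_ote(100_000)
print(round(base), round(variable))
```

Real mix design, of course, also involves quotas, caps, accelerators and simulation against historical attainment data.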
The Compensation Business Framework proposed by A. Zoltners, P. Sinha and S. Lorimer in the Sales Force Incentive Compensation book looks like this:
I believe it sets things in perspective.
It shows that the Company has direct control over the Compensation Scheme only. Salespeople choose the activities they want to undertake; if the compensation scheme is wrong or unfair, they might as well stay home. The Comp Plan is also only one of a few direct tools, and there are indirect tools the company can use to drive Customers' decisions. By setting the plans right, the organization can drive desired salespeople behavior (e.g. focusing on Clients' long-term needs, or promoting a new line of products), thereby indirectly influencing Customers' decisions.
The important point here is that if you have only one point of influence over salespeople's behavior and customers' decisions (that is, the sales plan and the management of its execution), why not do it right? If you still do it via Excel spreadsheets and emails, I think the time is right to go for something more secure, robust and full-featured.
An interesting article by P. M. Madhani suggests that you can even beat the business cycle (or at least reduce the size of the issue) by maneuvering salespeople's focus and goals through sales compensation plans.
The typical business cycle looks like this:
The idea presented by Madhani is that in the recession (contraction) phase, companies can be proactive: they can lower fixed pay costs and focus on variable pay-for-performance incentives, especially those which help keep the critical customer relationships.
The Author puts it this way:
Sales organizations that freeze or cut salaries or pay below market rates during expansion stage of business cycle will risk losing valuable sales employees and will struggle to attract the best sales talent. On the other hand, sales organizations that pay too much during recession period will risk damaging their financial health and ability to hire the sales employees they need to thrive in difficult market conditions
This makes the flexibility of IT compensation systems such as SPM strategically important.
Getting the sales strategy or sales compensation plans wrong can be a mortal sin for a company.
It can make the difference between life and death for the organization, or at least cause serious trouble.
Exactly this happened to CrossComm, a former competitor of Cisco's in the emerging market of networking and routers. They got into trouble by losing focus on the flagship product in anticipation of the release of a new product line. The sales force heated up the atmosphere with existing customers and lost interest in selling the existing hardware, in anticipation of the new, better gear. Unfortunately, glitches in the new product delayed its proper introduction to the market, and meanwhile sales of the existing product shrank rapidly. This almost annihilated the company and contributed to its takeover by another competitor - Olicom, based in Denmark.
Another famous sales compensation plan misalignment (which I experienced first hand, being one of the bank's customers) happened at Wells Fargo, one of the major US mortgage banks.
The below quotation from Forbes explains what happened:
The problems began when Wells Fargo executives pressured rank-and-file bank personnel to aggressively cross-sell products to enhance sales and revenue to meet certain quotas. Deception reared its ugly head when Wells Fargo employees then created millions of savings and checking accounts for customers without their knowledge or approval.
The case cost the bank a fortune and seriously damaged its reputation.
I hope that with this article I managed to place Sales Performance Management systems (and, by ricochet, Total Compensation-related systems) in a business perspective. I believe that while implementing SPM or Total Comp systems, it is important to understand why the functionalities are needed in the first place and what purpose they fulfill.
This is our goal while serving our customers at GGS - to understand the business side as well as the technical side. Because the business side is equally, or maybe even more, important: even the best-implemented feature will not do its job when there is no understanding of the business reason behind it.
The next point I would like to focus on in SPM is data - more precisely, what can go wrong with it and how to avoid issues.
The truth is that when you are ready to improve your SPM experience - whether by establishing a new tool, creating a self-service portal for salespeople or providing insightful reporting - one thing will probably slow you down along the way if you don't pay proper attention to it. That is your data.
With no good data, things won't fly, no matter how mature or cultivated the tools are. It will simply not work. And our number one consulting observation at GGS when it comes to data is that Clients tend to think their data is in better shape than it really is.
I think there is a plausible explanation for this common phenomenon. These are the top reasons why people generally think that their HR and SPM data is in better shape than it really is:
The list below is based on the real-life experience of Consultants implementing or adjusting SPM systems at their Clients'. It is therefore meant to be highly practical and should help in avoiding common mistakes.
It happens that while changing systems (or enhancing an existing one), we come across a situation that was not handled or foreseen until the change is about to happen. For example, we might not have prorated commissions so far, and now we would like to start prorating.
Imagine a situation where a company has only quarterly sales plans and now would like to move to semi-annual or annual plans. Keeping track of who "sat" where, for how long, and in what plan might not have been important for the company so far (potentially they allowed position, grade or title changes only at the start of a new quarter) - however, with the new approach it suddenly becomes important to know exactly who changed roles and when. Target and attainment proration will depend on that data. The company therefore needs to either start collecting this data or start provisioning it from the HCM to the SPM tool - if it is already available there.
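A minimal sketch of what such target proration might look like. The dates, targets and day-count convention here are all hypothetical assumptions - real plans may prorate by calendar months, business days or plan periods instead:

```python
from datetime import date

# Hypothetical role history: (start, end inclusive, annual target).
# The rep was promoted mid-year, so the annual target must be blended.
assignments = [
    (date(2023, 1, 1), date(2023, 6, 30), 1_200_000),
    (date(2023, 7, 1), date(2023, 12, 31), 2_000_000),
]

def prorated_target(assignments, days_in_year=365):
    """Day-weighted blend of the annual targets across role assignments."""
    total = 0.0
    for start, end, annual_target in assignments:
        days_held = (end - start).days + 1
        total += annual_target * days_held / days_in_year
    return total

print(round(prorated_target(assignments)))
```

Without the role-change dates, this calculation is simply impossible - which is exactly why the data suddenly matters once plans span longer periods.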
These inaccuracies spawn at the boundary between data and its understanding.
The best way to explain this type of inconsistency is to say that data has meaning to people. If two or more people understand the same data in a different way then the errors in system logic are unavoidable. To quote one of the GGS consultants: "simply the logic behind the data [in that instance] proves to be something else than assumed, therefore, data meaning is different and so, data can be wrongly used".
Imagine a data entity called "transaction date". It might mean various things to various people: the date when the contract is signed by the Client, when it is counter-signed, when the "deal" is accepted by Legal or Accounting, or when other arrangements are made. If the commission is based on the transaction date, it is important that the people filling in this data know which date is expected as >>the<< date of crediting the transaction, and fill it in accordingly. Otherwise, we might end up paying commission on transactions that have not really happened yet.
Another real-life example is the "join date". In one of our projects, we discovered that the "join date" was actually the date of rehire, which changes the perspective for certain corner cases. Or it happened that the "leave date" was filled in for people who still work at the company - which might have dire compensation consequences and can create chaos.
These types of errors happen when the people from Comp & Ben are not aware of the logic and true meaning of data stored and collected in HCM systems and other (e.g. sales, transactional) data sources.
This type of issue is not a data issue per se; however, it can significantly hamper implementation progress.
It happens that the implementation team and the data owners agree on a specific format in which data will be delivered (i.e. they arrange a technical specification document), yet the data provided does not follow the agreed format. This can happen in all types of integrations, no matter whether they are API integrations, flat files or data pulled from intermediate databases.
To visualize the issue with a simple example, imagine that the data integration specification says a semicolon will be the column delimiter in a flat (csv) file. Then it happens that a semicolon is itself used somewhere as part of the data, e.g. in a comment field. The file can't be processed as agreed and the integration job fails (because the system "thinks" there are more columns in the file than there really are). Other examples could be character encoding issues, wrong end-of-line characters or missing nodes in a JSON file, etc.
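Here is a tiny sketch of that semicolon clash, and of how quoting the offending field (if the agreed specification allows it) lets a proper CSV parser recover the intended columns where naive splitting cannot. The record contents are invented:

```python
import csv
import io

# A hypothetical 4-column record where the comment field itself
# contains the agreed delimiter (a semicolon), but is quoted.
raw = 'TX-1;ACME;1000;"great deal; client happy"\n'

# Naive splitting sees five columns instead of the agreed four:
print(len(raw.strip().split(";")))   # 5

# A CSV parser that honors quoting recovers the four agreed columns:
row = next(csv.reader(io.StringIO(raw), delimiter=";"))
print(len(row))                      # 4
```

The real fix, of course, is agreeing on (and validating) quoting or escaping rules in the specification document before any file is exchanged.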
These types of issues, although not critical on their own, can seriously blow up the data integration effort if they repeat unexpectedly and constantly.
It can be a challenge to establish a good, flawless data connection, and it takes time to do so. So it sometimes happens that projects continue to develop on poor-quality data, without running integrations - which typically means extra effort for everyone once the good connections are finally established.
The below quote from Pedro Nunes, one of the Solution Architects at GGS provides insights into these types of situations:
We had cases where the files were not coming according to the agreed specification documents (the wrong end of the line, column order changed, wrong sorts). We had a lot of file data validations in place (expected according to the specifications) but this wasn't planned items, so the file would just fail. It was not something we could "fix later" but we had to spend time investigating why it failed and communicate the reasons to the source.
Temporal data is data that refers to time instances. By storing temporal data, one is able to "tell the story" of how the situation the data describes evolved over time. This type of data usually has, according to wiki:
Temporal data can be stored in many ways, e.g. with start and end dates of the specific situation, or just start dates (sometimes called an effective date) for each change of the state of the facts stored in the temporal table. In that case the end dates of the situation are implicitly calculated by using the start dates of the subsequent entries.
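A minimal sketch of the second, start-dates-only convention: the end of each state is derived as the day before the next effective date, and the last state is open-ended. The role history here is invented:

```python
from datetime import date, timedelta

# Hypothetical history: each entry records a state change via an
# effective (start) date only; end dates are implicit.
history = [
    (date(2022, 1, 1), "Junior Sales Rep"),
    (date(2022, 9, 1), "Sales Rep"),
    (date(2023, 4, 1), "Senior Sales Rep"),
]

def with_end_dates(history):
    """Derive the implicit end date of each state from the next start date."""
    out = []
    for i, (start, state) in enumerate(history):
        end = history[i + 1][0] - timedelta(days=1) if i + 1 < len(history) else None
        out.append((start, end, state))
    return out

for start, end, state in with_end_dates(history):
    print(start, end, state)
```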
There can be a few issues with temporal data:
Not all IT systems generate or store effective dates. Sometimes it happens that we are asked to >>effective date<< certain information according to the time at which the information was provided.
This is a risky situation, as the time the system receives the information then becomes the start date (or effective date) of the temporal situation. In that case, any failure in receiving the data changes the picture, or the "story", that the data tells. It might then happen that the story is wrong. Therefore we recommend not using this approach in SPM or Comp & Ben systems.
Another thing to consider with temporal data is the size of the fact tables. Imagine that a system holds a hundred attributes of employee demographic data, and let's say a base salary is included in that data. Then all one hundred attributes need to be repeated in the underlying database upon each base salary change. And that is just one of the attributes that might change. Therefore, having one big temporal data table can mean a lot of unnecessary data entries and storage, repeating information that is already stored, for no good reason.
It also happens that a certain thing is described by more than one data object.
A classic SPM example would be information about transactions / contracts that consists of transaction id, customer id, transaction date, transaction amount, transaction currency etc. Typically, the customer id is then defined in another data object that includes the same customer id, and information that describes the customer, i.e. the customer name, address and so on. If it happens that the SPM system receives the customer id together with a transaction / contract entry, and the same customer is not defined in the other data object, the system will not be able to translate the customer id into customer name and to display that information properly.
Issues like these are known as referential or dictionary data issues. Standard practice is to drop a data upload that contains them. This might lead to transactions not being credited, commissions being miscalculated, and disputes being raised by salespeople - simply due to referential data issues.
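A pre-load referential check along these lines (the field names and records are hypothetical) can flag the orphaned transactions before the upload is attempted, instead of letting the whole job fail:

```python
# Hypothetical dictionary data: customer ids known to the SPM system.
customers = {"C-1": "ACME Corp", "C-2": "Globex"}

# Incoming transactions; the second one references an unknown customer.
transactions = [
    {"tx_id": "T-1", "customer_id": "C-1", "amount": 500},
    {"tx_id": "T-2", "customer_id": "C-9", "amount": 900},
]

# Split the batch into loadable rows and referential "orphans" to report.
valid = [t for t in transactions if t["customer_id"] in customers]
orphans = [t for t in transactions if t["customer_id"] not in customers]

print(len(valid), len(orphans))
```

Whether the orphans are rejected, parked for later reprocessing, or mapped to a placeholder customer is a design decision each project has to make explicitly.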
SPM systems that are being established need good test data. And by good test data in this context we mean:
Let's take a look at each of these points:
It is important that system changes are tested with data of similar complexity to the real data used in live systems.
For example, testing payout curves is more thorough when the attainment values checked against the curve cover all possible ranges. It is better to test attainments of 0%, 50%, just below 100%, above 100%, below the cap, above the cap, and so on, rather than keep the attainment at 100% for everybody for testing purposes.
This point is similar to the one above, with stress on the corner cases - the cases directly at the edges of value ranges. For example, it is good to have test data that covers the situation as of December 31 of a given year and as of January 1 of the following year. Another example: test a position or title change that happened exactly at year end and see how the system behaves. Or test the payout when the attainment value exactly hits the ceiling or a booster (although having ceilings in sales plans does not seem to be a good practice, as the literature says).
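To illustrate, here is a hypothetical payout curve (the threshold, accelerator and cap values are invented) together with exactly the boundary attainments worth testing - the threshold itself, 100%, and the point where the accelerated payout hits the cap:

```python
# Hypothetical curve: no payout below a 50% threshold, linear payout up
# to 100% attainment, a 2x accelerator above quota, capped at 200% payout.
def payout_rate(attainment, threshold=0.5, accelerator=2.0, cap=2.0):
    if attainment < threshold:
        return 0.0
    if attainment <= 1.0:
        return attainment
    return min(1.0 + (attainment - 1.0) * accelerator, cap)

# Probe the edges of every range, not just the 100% "happy path".
for a in (0.0, 0.49, 0.5, 1.0, 1.25, 1.5, 2.0):
    print(a, payout_rate(a))
```

Note that 1.5 is the interesting case here: it is where the accelerated payout exactly reaches the cap.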
Test data that is hard to re-create will put pressure on the people testing the system. They will be afraid of breaking something or overriding some test data, because they know that re-creating it will be hard. They will therefore be reluctant to stress the system and test corner cases. They will know that each test, or each mistake, will cost them time to recreate the data - especially if it is re-created manually via the application's UI rather than by just running a script or re-uploading the test data.
For these reasons it is good to maintain a clear-down-and-recreate script, preferably one that creates randomized, good test data if possible.
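A sketch of such a recreate helper (the names and quota ranges are invented). Seeding the random generator keeps the generated data reproducible between runs, so testers can always get back to a known state:

```python
import random

# Invented name pools for realistic-looking (but fake) sales reps.
FIRST = ["Anna", "Ben", "Carla", "Derek"]
LAST = ["Kowalski", "Smith", "Nguyen", "Garcia"]

def recreate_test_reps(n, seed=42):
    """Regenerate n randomized test sales reps; a fixed seed makes it repeatable."""
    rng = random.Random(seed)
    return [
        {
            "rep_id": f"REP-{i:03d}",
            "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
            "quota": rng.randrange(500_000, 2_000_000, 50_000),
        }
        for i in range(n)
    ]

reps = recreate_test_reps(100)
print(len(reps), reps[0]["rep_id"])
```

With a script like this behind a "clear down" job, breaking the test data costs seconds instead of days, so testers can be as aggressive as the corner cases demand.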
It does happen that the system behaves great with test data and shows top-notch performance.
Everything then goes well until Go-Live, when the system is confronted with millions of rows of data it has never seen before.
Then the performance issues might start, and the whole project effort is jeopardized simply because not enough test data was loaded in the first place.
The test data should reflect the references that the real data will have. For example, when there is a test customer id, there should also be a test customer name associated with it. It should reflect real life scenarios and references as closely as possible.
Preferably, the system should also be checked and tested by the people who will use it daily. Ideally, they should see a demo of the new system (or of changes to the existing one) every two weeks, and they should be able to check and play with the system afterwards. People react better to the system when what they see is meaningful to them - for example, when salespeople's names are real names rather than just character strings, or when the sales structure makes sense, etc. Otherwise, the users will easily get lost and become averse to the system.
Therefore, a good thing to remember is that people who will use the system daily react better to testing when the data they see in the system is meaningful to them.
Here is a little hint from the implementation perspective that we learned over more than five years of delivering new systems:
If I were to summarize the SPM data topic with one point I would say that it is important to create the time and energy buffer for data issues before the project starts.
Most likely, there will be data issues at the start. It will take time and energy to fix them. However, after some time of intensive effort and collaborative work, the data interfaces will be established, the data issues fixed, and data will no longer be an issue.
I hope this data topic has been meaningful and helpful.