By Kent E. Frese, Ph.D. — Industrial-Organizational Psychologist and Founder, FactorFactory
HR leaders working in small and mid-sized businesses tend to share a frustration that rarely gets aired in public: they know which tools would meaningfully improve hiring, leadership development, and team effectiveness, but they cannot get those tools funded. The proposal lands on the CFO's desk, gets a polite skim, and comes back with a question — sometimes asked aloud, more often implied — that boils down to: "What will this actually return?" The honest answer requires more than a vendor brochure.
Funding decisions in privately held companies and lean public organizations are, in the end, capital allocation decisions. They are made by people who measure each dollar against the next available alternative. An HR leader proposing a leadership assessment program, a hiring battery, or a 360 feedback initiative is competing for budget against equipment upgrades, headcount, marketing spend, and operating reserves. The question is not whether assessments are valuable in the abstract — the research on that is settled. The question is whether this investment, scoped this way, will generate a credible return inside a believable timeline. Building the business case is the work of translating between two languages: the language of organizational psychology and the language of capital allocation.
This article walks through the four moves that distinguish proposals that get funded from proposals that get tabled. None of them require advanced finance training. All of them require resisting the temptation to lead with what assessments measure rather than what they change.
Frame the Problem in Business Terms, Not HR Terms
The single most common failure mode in assessment program proposals is leading with the tool. "We need a 360 feedback platform." "We should add personality testing to hiring." "Our leaders need a development assessment." Each of those statements answers a question the executive team has not asked. Funding moves toward proposals that begin with a problem the business already feels and end with a tool that resolves it.
Effective business framing names a measurable cost the organization is currently absorbing. Mis-hires in critical roles. Leadership turnover that has accelerated over the past eighteen months. A leadership development spend that produces no observable behavior change. A succession plan that depends on assumptions about people who have never been formally evaluated. Each of these has a dollar figure attached — not always precise, but always estimable. Boushey and Glynn (2012) put the cost of replacing a single mid-level employee at roughly 21 percent of annual salary, and the cost rises sharply for senior and specialized roles. The Society for Human Resource Management's later research consistently places the all-in cost of a bad hire at one to three times the position's annual compensation when productivity loss, opportunity cost, and severance are included.
The reframe a CFO actually responds to sounds like this: "Our voluntary turnover in the leadership tier has cost us an estimated $480,000 over the past two years, and our last three external hires at that level required, on average, eleven months to reach full productivity. I am proposing an assessment program that targets both numbers — reducing the mis-hire rate and giving us a baseline measurement so leadership development can be tracked and adjusted." The assessment is now answering a problem the business already sees, not introducing one HR alone perceives.
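A figure like that $480,000 is more persuasive when the arithmetic behind it is visible. Here is a minimal sketch of how such an estimate might be assembled, assuming hypothetical salaries and departure counts, the replacement-cost multipliers cited above, and an assumed 50 percent effectiveness loss during ramp-up; every input is a placeholder for your own data.

```python
# Illustrative turnover-cost estimate for a leadership tier.
# Every figure is a hypothetical placeholder; substitute your own data.

departures = [
    # (annual salary, replacement-cost multiplier, months below full productivity)
    (150_000, 0.50, 9),
    (160_000, 0.50, 11),
    (185_000, 0.60, 12),
]

# Assume a replacement operates at roughly half effectiveness while ramping up.
RAMP_EFFECTIVENESS_LOSS = 0.5

total = 0.0
for salary, multiplier, ramp_months in departures:
    replacement_cost = salary * multiplier  # recruiting, onboarding, severance
    ramp_cost = salary * (ramp_months / 12) * RAMP_EFFECTIVENESS_LOSS
    total += replacement_cost + ramp_cost

print(f"Estimated turnover cost over the period: ${total:,.0f}")
# -> Estimated turnover cost over the period: $488,083
```

Laying the inputs out this way invites the CFO to challenge individual assumptions rather than dismiss the total.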
Pick a Scope Your CFO Will Believe
The second move is choosing the right scope. Proposals get funded when the scope is small enough to feel low-risk and large enough to produce learning. Proposals stall when the scope sounds either trivial ("a pilot with three managers") or overwhelming ("deploy across all 240 employees in year one").
For most organizations under 500 employees, the right first scope is a leadership cohort — typically eight to twenty leaders — with a defined, measurable use case. A common version: pre-assessment of the existing leadership team using a multi-rater 360, a single-day debrief and development-planning workshop, individual development plans with one quarterly check-in, and a follow-up assessment at twelve months. That structure takes the program out of the realm of soft-skills enrichment and into the realm of measurable intervention. Day, Fleenor, Atwater, Sturm, and McKee's (2014) review of leadership development research found that programs incorporating both initial assessment and structured follow-up produced significantly more durable behavior change than content delivery alone.
Hiring use cases follow a similar logic. Adding a structured assessment battery to one or two critical roles — the roles where mis-hires are most costly — is a credible first scope. The proposal can show baseline data: average time-to-productivity, current twelve-month retention rate, average cost of replacement. The follow-up data, twelve to eighteen months later, becomes the empirical case for expansion. Schmidt and Hunter's (1998) meta-analytic work, which remains foundational in the field, established that structured assessment combinations consistently outperform unstructured interviews and resume review in predicting job performance — but the framing for a CFO is not the meta-analysis. It is the cost differential between a hire that works and one that does not.
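That cost differential can be shown directly in the proposal. The sketch below assumes a single hypothetical critical role and uses the one-in-four to one-in-six improvement named in the next section as a defensible target; the compensation figure, multiplier, and hiring volume are all placeholders.

```python
# Illustrative mis-hire cost differential for one critical role.
# Rates, compensation, and hiring volume are hypothetical placeholders.

annual_comp = 120_000
mis_hire_cost = 1.5 * annual_comp   # mid-range of the 1-3x compensation estimate
hires_per_year = 4

baseline_rate = 1 / 4   # one in four hires misses under the current process
improved_rate = 1 / 6   # the target with a structured battery added

savings = (baseline_rate - improved_rate) * mis_hire_cost * hires_per_year
print(f"Expected annual savings: ${savings:,.0f}")
# -> Expected annual savings: $60,000
```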
The Numbers That Belong in the Proposal
Funded proposals contain four categories of numbers. Each of them needs to be defensible — not aggressive, not optimistic, just defensible.
Cost of the current state. What is the organization spending now, in dollars, on the problem the assessment program addresses? This includes turnover replacement cost, time-to-productivity loss for new hires, and the existing leadership development budget if the program is intended to make that spend more effective. Three years of historical data is usually enough to establish a credible baseline.
Cost of the proposed program. Total program cost, broken into year one and ongoing. For a leadership cohort of fifteen using a multi-rater 360, this might be the cost of fifteen assessment tokens, facilitator time for a workshop, and quarterly coaching check-ins. Token-based pricing models, which charge per assessment used rather than annual platform contracts, often produce a year-one figure under $15,000 for a program of this size — a number small enough to fit inside an existing professional development budget without a separate capital request.
Expected impact. What change does the program target, expressed as a percentage or absolute number? Common defensible targets include reducing leadership-tier turnover by one to two percentage points, cutting the mis-hire rate in critical roles from one in four to one in six, and producing a measurable shift in 360 feedback scores at twelve-month follow-up. Avolio, Avey, and Quisenberry (2010) modeled return on leadership development investment and found that even modest behavior change in a small cohort produced positive ROI when measured against turnover and productivity outcomes.
Payback timeline. When does the program break even, and when does it return its cost? For most leadership development programs in mid-sized businesses, payback timelines of twelve to eighteen months are credible. Anything shorter sounds inflated. Anything longer than three years competes poorly against alternative uses of capital.
Resist the urge to inflate. Vendor white papers routinely promote ROI figures of 400 percent, 700 percent, sometimes higher. Those numbers, when shown to a CFO, undermine the proposal. A defensible ROI estimate — one that names assumptions, shows the math, and uses conservative figures — wins more budget than an aggressive one that triggers skepticism.
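A worked version of that math, using the hypothetical fifteen-leader cohort cost from the scoping discussion and a deliberately conservative impact assumption, might look like the sketch below; every input is an assumption to be replaced with your own figures.

```python
# Illustrative payback estimate for a fifteen-leader 360 program.
# Every input is an assumption; the value is in showing the math.

program_cost = 15_000          # year one: tokens, workshop, coaching check-ins
cohort_size = 15
turnover_reduction = 0.01      # one percentage point, the conservative target
cost_per_departure = 160_000   # from the current-state analysis
benefit_lag_months = 6         # assume no benefit until the program has run

annual_benefit = cohort_size * turnover_reduction * cost_per_departure
payback_months = benefit_lag_months + (program_cost / annual_benefit) * 12

print(f"Annual benefit: ${annual_benefit:,.0f}")       # -> $24,000
print(f"Payback period: {payback_months:.1f} months")  # -> 13.5 months
```

Note that the conservative inputs land the payback inside the twelve-to-eighteen-month window described above, which is exactly where a credible proposal should sit.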
Anticipating the Pushback
Three objections show up in nearly every assessment program proposal. Anticipating them in writing increases the odds the proposal moves forward on its first pass rather than coming back with questions.
"We already do this." Most organizations have some form of performance review, and many run engagement surveys. The proposal needs to name precisely what current practices do and do not measure, and where the proposed assessments fill a measurement gap. Performance reviews assess outcomes; assessments measure underlying behaviors and capabilities that produce or undermine those outcomes. Engagement surveys measure how the organization feels; assessments measure how individual leaders behave. Naming the distinction prevents the conversation from collapsing into "isn't this redundant."
"This sounds expensive." The cost framing matters. Comparing program cost against the cost of a single avoided mis-hire — a number the organization has likely paid more than once — reframes the conversation. So does comparing year-one program cost against the existing professional development line item. In many proposals, the assessment program does not require new spending so much as a redirection of spending already approved.
"How do we know it actually worked?" The honest answer is that the program will produce measurable data, and the data will tell you. Pre- and post-assessment scoring, twelve-month retention figures, and time-to-productivity numbers for new hires are all measurable. Phillips and Phillips's (2008) ROI methodology for learning and development provides a defensible framework for converting these into financial figures. The proposal should commit to a specific evaluation cadence, not promise a particular outcome.
From Practice
A 140-person professional services firm in the Midwest had been running an annual leadership offsite for six years. The format was familiar: an outside speaker, breakout discussions, a sentiment survey at the end. The HR director knew the program was not producing measurable change, but every attempt to redesign it stalled at the same point — the partner group questioned whether spending on "more leadership stuff" was justified.
The HR director's revised proposal made three changes. First, she reframed the problem as a measurable cost: three director-level departures in eighteen months, with replacement and ramp-up costs estimated conservatively at $310,000. Second, she scoped the program tightly — the existing twelve-person leadership team, a multi-rater 360 baseline, a development-planning workshop, and a twelve-month follow-up assessment, all inside the existing offsite budget. Third, she committed to specific evaluation metrics: change in 360 scores on three target behaviors, leadership-tier retention at the twelve-month mark, and a written report to the partner group at month twelve.
The proposal cleared the partner group on its first review. The program ran. At twelve months, 360 scores on the three target behaviors had moved by an average of 0.7 points on a five-point scale — modest but real — and leadership-tier retention was 100 percent over the period. The proposal for year two doubled the cohort and added a hiring component. The change between proposal one and proposal two was not the tools — those were available the whole time. It was the language.
Make the Ask Once, and Make It Well
Funding for assessment programs in small and mid-sized businesses tends to follow the quality of the case more than the quality of the tool. The tools are largely commoditized at this point — the validity research is established, the methodologies are well understood, and the practical differences between credible vendors are smaller than the practical differences between credible business cases. What separates a funded program from one that gets tabled is whether the proposal speaks the language the decision-maker uses when allocating capital.
FactorFactory was built specifically for organizations that need scientifically rigorous assessment tools without enterprise pricing structures or annual platform commitments. Token-based pricing means the cost in your proposal can scale with the size of the cohort rather than starting at a five-figure annual minimum. If you are working through a business case for an assessment program and would like to discuss scope and pricing relevant to your situation, reach out — we can typically respond within a business day.
