The Science Behind Our Assessments
Science & Practice
Evidence-Based Assessment Solutions
At FactorFactory, we bridge the gap between academic research and practical application. Every assessment in our platform is built on rigorous scientific foundations and validated through extensive field testing.
Our Scientific Approach
Psychometric Validation
All assessments undergo rigorous validation, including:
- Reliability testing (test-retest, internal consistency; see the sketch after this list)
- Validity studies (construct, criterion, content)
- Factor analysis and structural validation
- Cross-cultural and demographic analysis
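To make one of these checks concrete: internal consistency is commonly summarized with Cronbach's alpha. The sketch below is purely illustrative, with hypothetical data, and is not our production pipeline:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents x 4 Likert items
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values of alpha near 1 indicate that a scale's items vary together; in practice we interpret alpha alongside the other evidence listed above, since high internal consistency alone does not establish validity.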
Normative Data
Our assessments are standardized against:
- Representative population samples
- Industry-specific norms where applicable
- Norms that are regularly updated and maintained
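In practice, normative data lets us convert a raw score into a standing relative to the reference group. Here is a minimal sketch of norm-referenced scoring, assuming the norm group's mean and standard deviation are known (the numbers are hypothetical):

```python
from statistics import NormalDist

def norm_referenced(raw: float, norm_mean: float, norm_sd: float):
    """Convert a raw score to a z-score, T-score, and percentile
    relative to a norm group."""
    z = (raw - norm_mean) / norm_sd
    t = 50 + 10 * z                    # T-score convention: mean 50, SD 10
    pct = 100 * NormalDist().cdf(z)    # percentile, assuming normality
    return z, t, pct

z, t, pct = norm_referenced(raw=32, norm_mean=27.4, norm_sd=5.1)
print(f"z = {z:+.2f}, T = {t:.1f}, percentile = {pct:.0f}")
```

This is also why norm maintenance matters: as the reference population shifts, the same raw score maps to a different relative standing.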
Legal Compliance
All assessments meet or exceed:
- EEOC Uniform Guidelines on Employee Selection Procedures
- Standards for Educational and Psychological Testing (AERA, APA, & NCME)
- SIOP Principles for the Validation and Use of Personnel Selection Procedures
- International testing standards, including the International Test Commission (ITC) guidelines
Modern Measurement Science
FactorFactory assessments are built on the latest advances in psychometric theory and measurement methodology. We don't simply repackage legacy instruments; we apply modern statistical frameworks to create assessments that are more reliable, fairer, and more resistant to response distortion than traditional tools.
Item Response Theory (IRT)
Classical test theory treats all items equally — a 50-item assessment produces a simple sum score. IRT is fundamentally different: it models the relationship between the underlying trait being measured and the probability of each response to each item. This allows us to:
- Estimate measurement precision at every score level — IRT reveals where an assessment measures well and where it loses precision, rather than assuming uniform reliability
- Place persons and items on the same scale — enabling direct comparison of item difficulty to respondent standing
- Detect misfit and aberrant response patterns — identifying careless or manipulated responses that classical methods miss
- Equate across forms — IRT parameters are sample-independent, allowing fair comparison across different test forms or administration conditions
Several FactorFactory assessments use IRT-calibrated items. Our behavioral instruments use Thurstonian models; our cognitive assessments use two-parameter logistic (2PL) models.
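For intuition about how a 2PL model works, here is a minimal sketch with illustrative parameters (not our calibration code). The probability of a correct response is a logistic function of the gap between respondent ability theta and item difficulty b, scaled by the item's discrimination a; item information shows where on the trait scale the item measures precisely, which is the basis for the precision-at-every-score-level point above:

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """2PL: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information_2pl(theta: float, a: float, b: float) -> float:
    """Fisher information for a 2PL item: a^2 * p * (1 - p).
    Peaks at theta == b, so precision varies across the trait range."""
    p = p_correct_2pl(theta, a, b)
    return a**2 * p * (1 - p)

# Hypothetical item: discrimination 1.4, difficulty +0.5
for theta in (-2, -1, 0, 0.5, 1, 2):
    print(f"theta={theta:+.1f}  P={p_correct_2pl(theta, 1.4, 0.5):.2f}  "
          f"info={item_information_2pl(theta, 1.4, 0.5):.2f}")
```

Running this for an item with difficulty +0.5 shows information peaking near theta = 0.5 and falling off elsewhere: exactly the non-uniform precision that a classical sum score hides.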
Thurstonian IRT & Forced-Choice Methodology
Traditional Likert-scale assessments (rate yourself 1-5) are vulnerable to response styles — acquiescence, social desirability, extreme responding, and deliberate faking. FactorFactory addresses this through forced-choice item formats scored with Thurstonian IRT:
- Paired comparisons — respondents compare two behavioral statements and indicate which is more characteristic, eliminating the ability to endorse everything highly
- Graded paired comparisons — our 4-point response scale captures degree of preference between statements while maintaining the forced-choice advantage
- Thurstonian scoring model — developed by Brown & Maydeu-Olivares (2011), this framework recovers absolute trait levels from comparative (ipsative) data, solving the longstanding limitation of traditional forced-choice instruments
- Faking resistance — because respondents must trade off between equally desirable statements, response distortion is significantly reduced compared to single-stimulus formats
Our DISC, Leadership Values (LVA), and Communication (FFCA) assessments all use this methodology.
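To make the scoring idea concrete, here is a minimal sketch of the response function for a single binary paired comparison in the Brown & Maydeu-Olivares (2011) Thurstonian IRT framework. All parameter values are hypothetical, and this is a response model only, not our estimation code:

```python
from dataclasses import dataclass
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

@dataclass
class Statement:
    mu: float      # mean utility (how desirable the statement reads)
    lam: float     # loading on its trait
    psi2: float    # uniqueness (error variance) of the latent utility
    trait: int     # index of the trait the statement measures

def p_prefer(s_i: Statement, s_k: Statement, eta: list[float]) -> float:
    """Probability that a respondent with trait vector eta prefers
    statement i over statement k in a binary paired comparison."""
    mean_diff = (s_i.mu - s_k.mu
                 + s_i.lam * eta[s_i.trait]
                 - s_k.lam * eta[s_k.trait])
    sd_diff = (s_i.psi2 + s_k.psi2) ** 0.5
    return PHI(mean_diff / sd_diff)

# Two equally desirable statements loading on different traits
s1 = Statement(mu=0.8, lam=0.9, psi2=0.4, trait=0)
s2 = Statement(mu=0.8, lam=0.9, psi2=0.4, trait=1)
# A respondent higher on trait 0 than on trait 1
print(f"P(prefer s1 over s2) = {p_prefer(s1, s2, eta=[1.0, -0.5]):.2f}")
```

Because the two statements share the same desirability (mu), the comparison is driven by the respondent's standing on the two traits rather than by how positively either statement reads; that forced trade-off is the source of the format's faking resistance.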
Item Format Design
We use several evidence-based item formats, each chosen for the specific construct being measured:
Graded Paired Comparisons
Used in DISC, LVA, and FFCA. Four-point scale between statement pairs. Scored with Thurstonian IRT. Resistant to faking.
Likert-Type Rating Scales
Used in ELLSI and AL360. Five-point agreement/frequency scales. Scored with graded response IRT models. Well suited to broad trait coverage.
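A graded response model assigns a probability to each ordered category rather than a single pass/fail curve. Here is a minimal sketch with hypothetical parameters (one discrimination a and ordered thresholds for a five-point scale; not our calibration code):

```python
import math

def grm_category_probs(theta: float, a: float, thresholds: list[float]) -> list[float]:
    """Samejima graded response model: P(X = k) for ordered categories.
    P*(k) = P(X >= k) is a 2PL curve at each threshold; category
    probabilities are differences of adjacent P* values."""
    p_star = [1.0]                                   # P(X >= lowest category) = 1
    p_star += [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
    p_star.append(0.0)                               # P(X >= beyond highest) = 0
    return [p_star[k] - p_star[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical five-point item: thresholds must be in increasing order
probs = grm_category_probs(theta=0.3, a=1.6, thresholds=[-1.5, -0.4, 0.6, 1.7])
print([f"{p:.2f}" for p in probs])  # probabilities for categories 1..5, sum to 1
```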
Scenario-Based Items
Used in the Reasoning assessment. Workplace scenarios with multiple-choice responses. Scored with a 2PL IRT model. Measures applied cognitive ability.
Multi-Rater Aggregation
Used in AL360. Combines self, supervisor, peer, and direct report ratings with category-weighted aggregation and gap analysis.
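As an illustration of the aggregation step, here is a minimal sketch of category-weighted averaging with a self-other gap; the weights and ratings are hypothetical, and the production weighting scheme may differ:

```python
from statistics import mean

# Hypothetical 360 ratings on one competency, grouped by rater category
ratings = {
    "self":          [4.0],
    "supervisor":    [3.2],
    "peer":          [3.5, 3.8, 3.1],
    "direct_report": [3.0, 3.4, 3.6, 2.9],
}
# Hypothetical category weights for the non-self composite
weights = {"supervisor": 0.4, "peer": 0.3, "direct_report": 0.3}

# Average within each category first so large rater groups don't dominate
category_means = {cat: mean(vals) for cat, vals in ratings.items()}

# Category-weighted composite across all non-self raters
others = sum(weights[cat] * category_means[cat] for cat in weights)

# Gap analysis: a positive gap flags a potential blind spot
gap = category_means["self"] - others
print(f"others composite = {others:.2f}, self-other gap = {gap:+.2f}")
```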
Theoretical Foundations
Each assessment is grounded in established psychological theory, not ad hoc item writing:
- DISC — William Marston's behavioral model (1928) with modern Thurstonian IRT measurement and 24-type profile system
- ELLSI — Five Factor Model of personality (Costa & McCrae, 1992) adapted for workplace applications with IRT-calibrated items
- AL360 — Grounded in Self-Determination Theory (Deci & Ryan), Psychological Safety (Edmondson), and Adaptive Leadership (Heifetz)
- LVA — McGregor's Theory X/Y framework (1960) measuring leadership philosophy on the control-to-empowerment spectrum
- FFCA — Johari Window framework (Luft & Ingham, 1955) measuring Exposure and Feedback Seeking communication behaviors
- Reasoning — Cattell-Horn-Carroll (CHC) theory measuring Fluid Reasoning (Gf), Comprehension-Knowledge (Gc), and Short-Term Working Memory (Gwm)
Why Evidence-Based Assessment Matters
- Better Decisions: Validated assessments predict job performance and other organizational outcomes
- Legal Protection: Properly validated tools reduce legal risk
- ROI Justification: Scientific validation demonstrates assessment value
- Fairness: Rigorous testing reduces the risk of bias across demographic groups
- Credibility: Science-based tools enhance professional reputation
Our Commitment to Quality
FactorFactory is committed to maintaining the highest standards of assessment quality:
- Regular review and updates of all assessments
- Ongoing validation studies and norm updates
- Collaboration with I-O psychology researchers
- Transparent reporting of psychometric properties
- Continuous improvement based on user feedback
Research Partnerships
We collaborate with researchers and institutions to advance the field of assessment science. Our platform provides:
- Infrastructure for assessment development and deployment
- Data collection capabilities for validation studies
- Publishing platform for new assessment tools
- Bridge between academic research and practice
Experience the Difference
See how scientific rigor translates to better organizational outcomes
Get Started