TestMi uses peer-reviewed methods to assess maturation, predict adult height, monitor training load and wellbeing, and flag injury risk in young athletes. Here's how each one works and what it means for coaches.
The Mirwald maturity offset equation estimates how far an athlete is from their peak height velocity (PHV) — the point during adolescence when they are growing fastest. This gives coaches a “maturity offset” in years: a negative value means the athlete has not yet reached PHV, and a positive value means they have passed it.
The equation uses standing height, seated (sitting) height, weight, and chronological age. Separate equations are used for males and females.
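As an illustration, here is a minimal sketch of the boys' equation using the coefficients commonly cited from Mirwald et al. (2002); the girls' equation uses different coefficients, and TestMi's exact implementation may differ, so verify coefficients against the paper before relying on this.

```python
def mirwald_offset_boys(age_years, height_cm, sitting_height_cm, weight_kg):
    """Estimated years from peak height velocity (PHV) for boys.

    Negative = pre-PHV, positive = post-PHV. Coefficients as commonly
    cited from Mirwald et al. (2002); check against the paper.
    """
    leg_length = height_cm - sitting_height_cm
    return (
        -9.236
        + 0.0002708 * (leg_length * sitting_height_cm)
        - 0.001663 * (age_years * leg_length)
        + 0.007216 * (age_years * sitting_height_cm)
        + 0.02292 * (weight_kg / height_cm * 100)
    )

# A 12-year-old boy (150 cm tall, 78 cm sitting height, 40 kg) has not
# yet reached PHV, so the offset comes out negative.
print(round(mirwald_offset_boys(12, 150, 78, 40), 2))
```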
The offset indicates whether an athlete is early, on-time, or late in their physical development relative to their chronological age. This helps with bio-banding (grouping by maturity rather than chronological age), understanding performance differences, and adjusting training loads during growth spurts.
The Mirwald equation is a prediction — not a direct measurement of PHV. It works best for athletes aged approximately 10–16 and is less accurate at the extremes of the age range. Accuracy depends on correct measurement of sitting height, which requires a trained measurer.
Reference: Mirwald, R.L., Baxter-Jones, A.D.G., Bailey, D.A., & Beunen, G.P. (2002). An assessment of maturity from anthropometric measurements. Medicine & Science in Sports & Exercise, 34(4), 689–694.
The Khamis-Roche method predicts an athlete's adult (final) height using their current height, current weight, and the heights of both parents. It is one of the most widely used non-invasive methods for height prediction in paediatric sports science.
TestMi asks coaches whether parent heights were clinically measured or self-reported. Self-reported heights are adjusted using Epstein et al. correction factors before being used in the prediction, as self-report tends to overestimate height.
The method produces an estimate of how tall the athlete will be as an adult, along with a 90% confidence interval. Combined with current height, this gives the athlete's percentage of adult height — a useful indicator of how much growth remains.
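The Khamis-Roche prediction itself relies on age- and sex-specific coefficient tables that are not reproduced here, but the percentage-of-adult-height step is simple arithmetic. A minimal sketch, assuming the predicted adult height has already been computed:

```python
def percent_adult_height(current_height_cm, predicted_adult_height_cm):
    """Share of predicted adult stature already attained, as a percentage."""
    return 100 * current_height_cm / predicted_adult_height_cm

# An athlete measuring 160 cm with a predicted adult height of 183 cm
# has attained roughly 87% of their adult stature.
print(round(percent_adult_height(160, 183), 1))  # 87.4
```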
Prediction accuracy depends on reliable parent height data. The method was validated on a North American population sample and may be less accurate for athletes from other genetic backgrounds. It works best for athletes aged 4–17.5.
Reference: Khamis, H.J. & Roche, A.F. (1994). Predicting adult stature without using skeletal age: the Khamis-Roche method. Pediatrics, 94(4), 504–507.
Correction factors: Epstein, L.H., et al. (1995). Estimation of stature from self-report. American Journal of Epidemiology, 142(8).
TestMi calculates height velocity (cm/year) from repeated height measurements across testing sessions. Rapid growth is associated with increased injury risk in young athletes — particularly for apophyseal injuries (e.g. Osgood-Schlatter, Sever's disease) and muscle-tendon overuse injuries.
The Growth Tracker view shows each athlete's current growth velocity and flags those in a rapid growth phase. Risk levels (Low, Medium, High) are based on the combination of growth velocity and proximity to PHV.
The tracker highlights which athletes are currently in a growth spurt and may need modified training volumes, reduced plyometric loading, or closer monitoring for overuse complaints. This is not a diagnostic tool — it is an early warning system that supports informed coaching decisions.
Growth velocity requires at least two height measurements taken at different times. Accuracy improves with more frequent measurements (ideally every 6–10 weeks). A single anomalous measurement (e.g. incorrect technique) can produce misleading velocity values — TestMi flags suspected measurement errors automatically.
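The underlying arithmetic is straightforward annualisation between two dated measurements. A minimal sketch (the thresholds TestMi uses to flag "rapid growth" are not shown here):

```python
from datetime import date

def height_velocity_cm_per_year(h1_cm, d1, h2_cm, d2):
    """Annualised height velocity between two dated measurements."""
    days = (d2 - d1).days
    if days <= 0:
        raise ValueError("measurements must be in chronological order")
    return (h2_cm - h1_cm) / days * 365.25

# 2.1 cm gained over 70 days annualises to roughly 11 cm/year -- well
# inside a rapid-growth phase for most adolescents.
v = height_velocity_cm_per_year(152.0, date(2024, 1, 10), 154.1, date(2024, 3, 20))
print(round(v, 1))  # 11.0
```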
All of the methods above depend on accurate measurement data. TestMi includes automatic anomaly detection that scans an athlete's measurement history and warns coaches about potential data entry errors — such as height decreasing between sessions, implausible growth rates, or sitting height exceeding standing height.
These checks run automatically when session data is saved and when viewing an athlete's profile. They are non-blocking — coaches can choose to correct the data or proceed if the values are correct.
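The checks described above can be sketched as simple plausibility rules. The thresholds and field names below are illustrative assumptions, not TestMi's actual values:

```python
def measurement_warnings(height_cm, sitting_height_cm, prev_height_cm=None,
                         annualised_velocity=None):
    """Non-blocking plausibility checks of the kind described above.

    The 15 cm/year velocity cut-off is an illustrative threshold,
    not TestMi's actual value.
    """
    warnings = []
    if sitting_height_cm >= height_cm:
        warnings.append("sitting height exceeds standing height")
    if prev_height_cm is not None and height_cm < prev_height_cm:
        warnings.append("height decreased since last session")
    if annualised_velocity is not None and annualised_velocity > 15:
        warnings.append("implausible growth rate")
    return warnings

# Two data-entry errors in one session trigger two separate warnings.
print(measurement_warnings(150.0, 152.0, prev_height_cm=151.0))
```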
TestMi's implementation of these methods was developed and validated in partnership with Move4Sport, a specialist youth S&C coaching provider. Their coaches — with backgrounds in Olympic sport, professional football, and international tennis — tested the platform across multiple sports and age groups.
The calculation logic is covered by automated tests verified against published reference values. If you have questions about the methodology or would like to discuss the implementation, please get in touch.
TestMi uses session Rating of Perceived Exertion (sRPE) to quantify training load. Each session load is calculated as RPE (1–10) multiplied by session duration in minutes. Weekly totals are broken down by training type, giving coaches a clear picture of how load is distributed across activities.
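The sRPE calculation is a single multiplication, conventionally reported in arbitrary units (AU). A minimal sketch:

```python
def session_load(rpe, duration_min):
    """sRPE load in arbitrary units (AU): RPE (1-10) x duration (minutes)."""
    if not 1 <= rpe <= 10:
        raise ValueError("RPE must be between 1 and 10")
    return rpe * duration_min

# A 60-minute session rated 7/10 yields a load of 420 AU.
print(session_load(7, 60))  # 420
```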
The Acute:Chronic Workload Ratio (ACWR) is calculated using the uncoupled method: the acute load is the current week's total, and the chronic load is the average of the available previous weeks (up to three). This requires at least two weeks of prior data before a ratio can be computed.
ACWR values are mapped to four training zones: Under-training (<0.8), Sweet Spot (0.8–1.3), Caution (>1.3–1.5), and Danger (>1.5). These zones help coaches identify when an athlete's load has spiked relative to what they have been prepared for.
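The uncoupled calculation and zone mapping described above can be sketched as follows; the data-sufficiency rule (at least two prior weeks) matches the text, but the exact handling of edge cases in TestMi may differ:

```python
def acwr(weekly_loads):
    """Uncoupled ACWR: current week vs. mean of up to three prior weeks.

    weekly_loads is ordered oldest-to-newest. Returns None with fewer
    than two weeks of prior data, matching the rule described above.
    """
    if len(weekly_loads) < 3:
        return None
    acute = weekly_loads[-1]
    chronic_window = weekly_loads[-4:-1]  # up to three prior weeks
    chronic = sum(chronic_window) / len(chronic_window)
    return acute / chronic if chronic > 0 else None

def acwr_zone(ratio):
    """Map a ratio to the four zones listed above."""
    if ratio < 0.8:
        return "Under-training"
    if ratio <= 1.3:
        return "Sweet Spot"
    if ratio <= 1.5:
        return "Caution"
    return "Danger"

# Three steady weeks (1000, 1200, 1100 AU) followed by a 1600 AU spike:
ratio = acwr([1000, 1200, 1100, 1600])
print(round(ratio, 2), acwr_zone(ratio))  # 1.45 Caution
```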
The ratio indicates whether an athlete's current training load is within a safe range relative to their recent training history. Spikes into the Caution or Danger zones are associated with increased injury risk and may warrant load reduction or modified programming.
RPE is a subjective measure and can be influenced by mood, fatigue, and athlete experience. TestMi does not capture external load (e.g. GPS, accelerometer data). The ACWR zone thresholds are derived from population-level research and may not perfectly apply to every individual athlete or sport context.
Reference: Gabbett, T.J. (2016). The training—injury prevention paradox: should athletes be training smarter and harder? British Journal of Sports Medicine, 50(5), 273–280.
TestMi includes daily subjective wellness tracking across four dimensions: Sleep Quality, Fatigue, Soreness, and Stress — each rated on a scale of 1 to 5. Sleep hours are tracked separately as an objective complement to the subjective sleep quality rating.
Wellness data feeds into other areas of the platform, including RED-S risk assessment and menstrual cycle pattern analysis. By tracking these indicators over time, coaches and athletes can identify trends that may signal overtraining or under-recovery before performance declines or injury occurs.
Wellness tracking acts as an early warning system for athlete wellbeing. Persistent low scores across wellness dimensions — particularly when combined with high training loads — may indicate that an athlete needs recovery time, workload adjustment, or a conversation about factors outside training.
Reference: Saw, A.E., Main, L.C., & Gastin, P.B. (2016). Monitoring the athlete training response: subjective self-reported measures trump commonly used objective measures: a systematic review. British Journal of Sports Medicine, 50(5), 281–291.
TestMi supports four-phase menstrual cycle tracking: Menstruation, Follicular, Ovulation, and Luteal. Athletes can log their current phase, flow level, cycle day, perceived impact on training, and up to 12 symptoms with severity ratings (0–3) on a daily basis.
After three or more tracked cycles, the platform generates personalised pattern analysis. This computes per-phase averages for fatigue, sleep quality, soreness, and stress, identifies the most common symptoms in each phase, and produces actionable insights tailored to the individual athlete.
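The per-phase averaging step can be sketched as a simple group-by. The field names below are illustrative, not TestMi's actual schema:

```python
from collections import defaultdict
from statistics import mean

def per_phase_averages(daily_logs):
    """Average each wellness metric within each cycle phase.

    daily_logs: list of dicts like {"phase": "Luteal", "fatigue": 3, ...}.
    Field names are illustrative assumptions.
    """
    by_phase = defaultdict(lambda: defaultdict(list))
    for log in daily_logs:
        for metric, value in log.items():
            if metric != "phase":
                by_phase[log["phase"]][metric].append(value)
    return {phase: {m: round(mean(vs), 2) for m, vs in metrics.items()}
            for phase, metrics in by_phase.items()}

logs = [
    {"phase": "Follicular", "fatigue": 2, "sleep_quality": 4},
    {"phase": "Follicular", "fatigue": 3, "sleep_quality": 4},
    {"phase": "Luteal", "fatigue": 4, "sleep_quality": 3},
]
print(per_phase_averages(logs))
```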
Cycle phase is overlaid on ACWR charts, allowing coaches and athletes to consider training load in the context of the menstrual cycle. All cycle data is subject to explicit access permissions — athletes control who can see this information.
Pattern analysis shows how an individual athlete's wellness, symptoms, and perceived readiness vary across their cycle. This supports informed conversations about training periodisation and recovery — but the system emphasises individual patterns rather than population-level assumptions about cycle phase effects.
Inter-individual variability in menstrual cycle effects is enormous. The system deliberately focuses on building each athlete's personal baseline rather than applying generalised recommendations. Pattern analysis requires consistent logging across at least three cycles to produce meaningful insights.
Relative Energy Deficiency in Sport (RED-S) is a syndrome caused by insufficient energy availability relative to the demands of training. TestMi includes an automated screening tool that monitors key indicators and flags athletes who may benefit from further assessment.
The screening uses a traffic-light model: GREEN (no current concerns), YELLOW (review recommended), and RED (referral to a medical professional recommended). The system monitors for secondary amenorrhea (three or more consecutive missed periods), delayed menarche (onset after age 15), oligomenorrhea (fewer than nine cycles in twelve months), persistent fatigue combined with poor sleep, and frequent appetite changes.
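The indicators listed above lend themselves to a rules-based screen. The flag-to-colour mapping below is an illustrative assumption (the amenorrhea and delayed-menarche rules escalate to RED, the others to YELLOW); TestMi's actual escalation logic may differ, and this is a screening aid, not a diagnosis:

```python
def reds_screen(missed_periods_consecutive, age, menarche_reached,
                cycles_last_12_months, persistent_fatigue_poor_sleep,
                frequent_appetite_changes):
    """Traffic-light screen over the indicators listed above.

    Which flags escalate to RED vs. YELLOW is an illustrative
    assumption, not TestMi's confirmed logic.
    """
    red_flags, yellow_flags = 0, 0
    if missed_periods_consecutive >= 3:      # secondary amenorrhea
        red_flags += 1
    if age > 15 and not menarche_reached:    # delayed menarche
        red_flags += 1
    if cycles_last_12_months is not None and cycles_last_12_months < 9:
        yellow_flags += 1                    # oligomenorrhea
    if persistent_fatigue_poor_sleep:
        yellow_flags += 1
    if frequent_appetite_changes:
        yellow_flags += 1
    if red_flags:
        return "RED"
    return "YELLOW" if yellow_flags else "GREEN"

print(reds_screen(0, 14, True, 11, False, False))  # GREEN
```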
The screen indicates whether an athlete is showing warning signs that warrant a conversation with a medical professional. RED-S can affect bone health, hormonal function, metabolic rate, and long-term athletic development. Early identification is critical.
This is a screening tool, not a diagnostic instrument. It is designed to prompt conversations between coaches, athletes, and medical professionals — not to replace clinical assessment. The indicators monitored are a subset of the full RED-S clinical assessment tool.
Reference: Mountjoy, M., et al. (2018). IOC consensus statement on relative energy deficiency in sport (RED-S): 2018 update. British Journal of Sports Medicine, 52(11), 687–697.
TestMi uses T-score standardisation (mean = 50, SD = 10) to allow meaningful comparisons of test results across different metrics. This places every score on a common scale where 50 represents the average of the reference population and each 10-point increment represents one standard deviation.
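T-score standardisation is a one-line transformation of the familiar z-score. A minimal sketch:

```python
def t_score(raw, ref_mean, ref_sd):
    """Standardise a raw result against a reference population.

    T = 50 + 10 * z, so 50 is the reference average and each 10-point
    step is one standard deviation.
    """
    if ref_sd <= 0:
        raise ValueError("reference SD must be positive")
    return 50 + 10 * (raw - ref_mean) / ref_sd

# A 190 cm jump against a reference group with mean 180 cm and SD 20 cm
# is half a standard deviation above average.
print(t_score(190, 180, 20))  # 55.0
```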
Normative datasets are resolved using a three-tier system: first, any athlete-specific dataset assigned to the individual; second, a sport-level default dataset; and third, auto-generated platform norms computed from all athletes in the same sport, grouped by test type, gender, and age. Platform norms require a minimum sample size of five results to be generated.
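The three-tier fallback described above can be sketched as a short resolution function. The parameter names and data shapes are illustrative assumptions, not TestMi's actual API:

```python
def resolve_norms(athlete_dataset, sport_default, platform_norms,
                  min_sample=5):
    """Three-tier normative dataset resolution as described above.

    platform_norms is a (values, sample_size) pair or None; names and
    shapes are illustrative assumptions.
    """
    if athlete_dataset is not None:          # tier 1: athlete-specific
        return athlete_dataset
    if sport_default is not None:            # tier 2: sport-level default
        return sport_default
    if platform_norms and platform_norms[1] >= min_sample:
        return platform_norms[0]             # tier 3: auto-generated norms
    return None                              # no appropriate comparison group

# With no assigned or sport-level dataset, platform norms are used
# only once the minimum sample size (five results) is met.
print(resolve_norms(None, None, ({"mean": 180, "sd": 20}, 12)))
```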
T-scores show how an athlete's test results compare to an appropriate reference group. This contextualises raw scores — a standing long jump of 180cm means very different things for a 12-year-old swimmer and a 16-year-old sprinter. Athletes can also be excluded from normative comparison if the available datasets are not appropriate.
Normative comparisons are only meaningful when the reference population is relevant to the athlete being assessed. Auto-generated platform norms reflect the current user base and may not be representative of broader populations. Coaches should consider whether the comparison group is appropriate before drawing conclusions.