Athletics competitions comprise a range of events, including the long jump. As horizontal speed is highly correlated with long jump distance (r = 0.70-0.95) (Hay, 1993), training plans in the long jump are often very similar to those used in sprinting (e.g., consisting mainly of acceleration, maximal velocity, resisted and assisted sprints, and resistance training) (Haugen et al., 2019), with technical jumping work normally making up only ~10-15% of elite long jumpers' programs (lead author's unpublished observations and communications with international-level coaches). However, there is little research on the training practices and training load (TL) monitoring of Olympic-level long jump athletes, especially when compared with elite endurance and team sports (McLaren et al., 2018; Mujika, 2017).

A number of different measures can be used to monitor TL in athletics. These measures typically assess either internal TL (i.e., the athlete's psychophysiological response to training) or external TL (i.e., the actual work performed in training) (Impellizzeri et al., 2019). It is recommended that both of these constructs be applied, and their relationships monitored, to optimize the athlete's training (Coyne et al., 2018; Impellizzeri et al., 2019). However, there is no consensus on the most appropriate methods for measuring external TL in athletics, with the majority of research in this area focusing on internal TL or on non-TL outcome measures (e.g., sprint tests, countermovement jumps) to monitor adaptations to training (Cristina-Souza et al., 2019; Haugen et al., 2019; Jimenez-Reyes et al., 2016; Suzuki et al., 2006). In this research, the most common internal TL measure was the session rating of perceived exertion (sRPE), which is also recommended as a primary TL intensity measure in team sports and is used widely in endurance sports (Drew and Finch, 2016; McLaren et al., 2018; Mujika, 2017). There appears to be a relationship between sRPE-TL (the product of sRPE and session duration) (Foster et al., 2021) and sprint performance: sRPE-TL applied within Banister's model predicted performance in an elite Japanese 400-m sprinter (Suzuki et al., 2006).

With regard to monitoring tools that can be used with sRPE-TL, the acute-to-chronic workload ratio (Hulin et al., 2014) has been the most popular in many coaching circles, although there appear to be significant statistical concerns with its use (Impellizzeri et al., 2020). Alternatives to the acute-to-chronic workload ratio include the training stress balance (TSB) metric (Allen and Coggan, 2010), calculated as the chronic minus the acute TL, and the differential load (Lazarus et al., 2017), an exponential smoothing of the week-to-week rate of change in TL. Both of these measures have become more common in TL monitoring research. For instance, in a recent investigation of elite weightlifting (which, like the long jump, is a sport with high neuromuscular demands) prior to a 2016 Olympic qualification competition, the volatility of sRPE-TL TSB was significantly lower for successful performances than for unsuccessful performances (Coyne et al., 2020b).
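To make these load metrics concrete, the following is a minimal sketch of how sRPE-TL, the acute-to-chronic workload ratio, TSB, and differential load can be calculated. The 7-day acute window, 28-day chronic window, and smoothing factor used here are illustrative assumptions rather than values taken from the cited studies.

```python
# Illustrative sketch (not taken from the cited studies) of sRPE-TL and the
# derived monitoring metrics discussed above. Window lengths and the smoothing
# factor are assumptions chosen for demonstration only.

def srpe_tl(session_rpe: float, duration_min: float) -> float:
    """Session RPE training load: sRPE rating x session duration (min)."""
    return session_rpe * duration_min

def rolling_mean(daily_tl: list, window: int) -> float:
    """Simple moving average of the most recent `window` daily loads."""
    recent = daily_tl[-window:]
    return sum(recent) / len(recent)

def acute_chronic_ratio(daily_tl: list, acute: int = 7, chronic: int = 28) -> float:
    """Acute-to-chronic workload ratio (rolling-average form)."""
    chronic_tl = rolling_mean(daily_tl, chronic)
    return rolling_mean(daily_tl, acute) / chronic_tl if chronic_tl else float("nan")

def training_stress_balance(daily_tl: list, acute: int = 7, chronic: int = 28) -> float:
    """TSB as described above: chronic TL minus acute TL."""
    return rolling_mean(daily_tl, chronic) - rolling_mean(daily_tl, acute)

def differential_load(weekly_tl: list, alpha: float = 0.25) -> float:
    """Exponential smoothing of the week-to-week change in TL."""
    smoothed = 0.0
    for previous, current in zip(weekly_tl, weekly_tl[1:]):
        smoothed = alpha * (current - previous) + (1 - alpha) * smoothed
    return smoothed

# Example: four weeks of daily sessions rated on a CR-10 scale, 60 min each.
daily = [srpe_tl(rpe, 60) for rpe in [5, 6, 4, 7, 5, 0, 3] * 4]
weekly = [sum(daily[i:i + 7]) for i in range(0, len(daily), 7)]
print(acute_chronic_ratio(daily), training_stress_balance(daily), differential_load(weekly))
```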
Another item of interest when monitoring TL is the debate over the most appropriate smoothing method for TL data (Coyne et al., 2018; Williams et al., 2016). It has been suggested that simple moving averages (SMA) do not account for variations in how athletes accumulate TL or accurately represent the physiological gain or decay of "fitness" and "fatigue" (Menaspà, 2017; Williams et al., 2016). Due to these concerns, exponentially weighted moving averages (EWMA) have been proposed as a superior alternative (Menaspà, 2017; Williams et al., 2016). However, like SMA, EWMA also has conceptual issues, with the set time constants typically used being problematic if athletes have individual "fitness" and "fatigue" gain and decay rates (Coyne et al., 2018). Compounding this issue, two different EWMA calculation methods are predominantly presented in the scientific literature (Lazarus et al., 2017; Williams et al., 2016). These different calculation methods (SMA and the EWMA variations) produce different values from the same TL data, and there have been conflicting results as to which smoothing method produces TL metrics with superior relationships to performance (Coyne et al., 2020b; 2021).

It is also common for practitioners to combine TL monitoring with athlete readiness measures that aid acute decision-making about an athlete's training (Coyne et al., 2018). Readiness measures are those that can infer, or are associated with, an athlete's ability to train or perform in competition (Cullen et al., 2020). Two readiness measures that have been studied in elite and college-level athletics are heart rate variability (HRV) and direct current potential (DC) (Berkoff et al., 2007; Peterson, 2018). HRV is the variability between successive heart beats (RR intervals) and is considered an indicator of autonomic nervous system status (Buchheit, 2014). DC is less researched; it is defined as very slow brainwave activity (0-0.5 Hz), is measured through electrodes placed on the scalp or on the forehead and thenar eminence, has been suggested to be an indicator of central nervous system status, and appears to be correlated with electroencephalography measures (Coyne et al., 2020a; Valenzuela et al., 2020). Awareness of athletes' autonomic and central nervous system status appears worthwhile for athletics coaches when informing training (Buchheit, 2014; Peterson, 2018). HRV has been studied more extensively in endurance and team sports and may also be more applicable in those sports (Buchheit, 2014). Of the studies assessing HRV and DC in athletics, Berkoff et al. (2007) found no difference in HRV variables between power-based (e.g., sprint, long jump) and aerobic-based (e.g., 1500 m, steeplechase) athletes at the 2004 United States Track and Field Olympic trials. Meanwhile, Peterson (2018) determined that the root mean square of successive RR interval differences (RMSSD) and DC could predict performance in NCAA Division 1 sprint competitions. This result aligns with current recommendations for RMSSD to be the primary variable for HRV analysis (Buchheit, 2014; Plews et al., 2013). However, practitioners should be aware that there may not be a positive relationship between RMSSD and competition performance in elite athletes, which differs from, and may even be opposite to, the relationship seen in national-level or well-trained athletes (Plews et al., 2013; 2017). Regarding DC, the authors were unable to find any recommendations for monitoring this measure in athletes, despite recent publications examining DC's measurement characteristics (Coyne et al., 2020a; Valenzuela et al., 2020).
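As an illustration of the RMSSD variable referred to above, the following minimal sketch computes it from a series of RR intervals; the sample values are arbitrary, and the snippet is not intended to represent the recording or processing protocols used in the cited studies.

```python
# Minimal, assumption-based illustration of the RMSSD calculation from RR
# intervals; not the protocol used in the studies cited above.
import math

def rmssd(rr_intervals_ms: list) -> float:
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: a short series of RR intervals in milliseconds (arbitrary values).
print(rmssd([812, 845, 790, 860, 825, 871, 840]))
```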
In light of the scant research on TL monitoring in elite long jumpers and the inability to identify the effects of training without precise quantification of TL (Mujika, 2017), further investigation in this area is justified. Therefore, the first purpose of this study was to provide descriptive data on sRPE-TL, HRV, and DC from an elite cohort of long jump athletes prior to and during Olympic competition. The second purpose was to investigate the correlations of TL, HRV, and DC with competition performance and to determine whether differences exist in these measures between intra-athlete successful and unsuccessful performances. Based on previous research examining sRPE-TL and competition performance (Coyne et al., 2020b; Coyne et al., 2021; Suzuki et al., 2006), we hypothesized that there would be positive correlations between sRPE-TL and intra-athlete performance and significant differences in sRPE-TL values between intra-athlete successful and unsuccessful performances. Due to the debate over the different TL smoothing methods, the final purpose of this study was to examine the three main smoothing methods used in the previous literature to add to the evidence base for practitioners.
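As context for this final purpose, the following minimal sketch contrasts an SMA with two generic EWMA implementations that differ only in their smoothing constant and initialization, illustrating how the choice of calculation changes the resulting TL values. The window length and smoothing constants are assumptions for demonstration and are not claimed to reproduce the exact published forms (Lazarus et al., 2017; Williams et al., 2016).

```python
# Illustrative sketch of how different smoothing choices change a TL series.
# The window length and smoothing constants below are assumptions for
# demonstration; they are not claimed to match the published calculations.

def sma(daily_tl: list, window: int = 7) -> list:
    """Simple moving average over the most recent `window` days."""
    return [sum(daily_tl[max(0, i - window + 1):i + 1]) /
            len(daily_tl[max(0, i - window + 1):i + 1])
            for i in range(len(daily_tl))]

def ewma(daily_tl: list, lam: float, start: float = 0.0) -> list:
    """Generic EWMA recursion: today's load weighted by lam, history by (1 - lam)."""
    smoothed, current = [], start
    for load in daily_tl:
        current = lam * load + (1 - lam) * current
        smoothed.append(current)
    return smoothed

daily = [300, 360, 240, 420, 300, 0, 180] * 4       # daily sRPE-TL, arbitrary units
variant_a = ewma(daily, lam=2 / (7 + 1))            # one smoothing-constant convention
variant_b = ewma(daily, lam=1 / 7, start=daily[0])  # an alternative convention
print(sma(daily)[-1], variant_a[-1], variant_b[-1]) # three different smoothed values
```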