Testing is used a lot at all levels of sport – sometimes for the right reasons, and sometimes for the wrong reasons. There is a whole host of reasons why a coach may employ testing with their athlete, and there is also a wide range of tests available for the coach to use. In this article, I will examine the pros and cons of testing, the reasons behind it, and the selection of tests that may or may not be useful.
The first thing to discuss is the reasoning behind testing. When deciding to test an athlete, it is important to consider why you are doing so. In my opinion, the main reason to test an athlete is to monitor the training response – are the changes you are putting in place having the desired effect and improving the athlete's performance? Performance on the battery of tests used can provide very valuable feedback on this, and, utilised at the right time, can allow changes to be made to the training programme to address the athlete's current shortcomings. In the same vein, tests can be used as a snapshot of where the athlete is at that very moment in time. This allows the collection of data that could be very useful – for example, if you test an athlete over a flying 30m, you can compare that result to their race performances during the year to see whether there is any correlation between their flying 30m testing performance and their 100m performance. If you decide that there is, you can then use the flying 30m during training blocks to gauge how close that athlete is to their peak condition, or even whether they are exceeding it.
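That flying 30m vs. 100m comparison boils down to a simple correlation check. As a minimal sketch – the numbers below are entirely made up for illustration, not real athlete data – you could compute Pearson's r between the two sets of times:

```python
# Hypothetical paired data for one athlete across a season:
# each flying 30m test time (s) alongside the nearest 100m race time (s).
# Values are illustrative only.
flying_30 = [2.95, 2.92, 2.88, 2.85, 2.90, 2.83]
race_100 = [10.40, 10.35, 10.28, 10.21, 10.30, 10.18]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(flying_30, race_100)
print(f"r = {r:.2f}")  # a value near +1 would support using the flying 30m as a proxy
```

With only a handful of data points the coefficient is fragile, of course, so a coach would want several seasons of paired data before trusting the relationship enough to use the flying 30m as a stand-in for race readiness.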
The correct application of testing can also increase the motivation of an athlete you are working with. Athletes generally respond very well to competition and pressure, and the use of tests can motivate them to perform above and beyond their typical training performance. If athletes know a particular test is coming up, they can focus on performing well in it. For example, regular use of body composition testing ensured that I took my diet seriously and made a concerted effort to lose fat and score well on the test (whether this was actually positive or not is up for debate!).
As I mentioned in the previous point, athletes usually respond well to pressure and competition, and it is important that they are able to do so. Competition, especially at higher levels, exerts large amounts of pressure on athletes, and they need the mental resilience to cope with and thrive under it. The timely application of performance-based tests can place athletes under simulated competition pressure, enabling the coach and support staff to identify those who may need some help dealing with it. Mental skills training is often overlooked, but mental skills are just as trainable as physiological qualities, so monitoring this side of the athlete's performance is important.
Why Not Test?
So far, it seems like there are some very good reasons for coaches to use testing as part of their training design and monitoring programmes. However, for each of the points discussed in the previous section, there are counter-points to consider when deciding whether or not to test an athlete. Firstly, whilst tests can be used to monitor the training response, this is only true if the test is both valid and reliable. I will look at these concepts in more detail later, so I don't want to dwell on them here, but essentially validity means that the test measures what it claims to measure (the coach then needs to decide whether what the test measures actually matters), and reliability means that if the test were repeated under the same conditions, the results would be consistent. So, as an example, if you use one repetition maximum (1RM) testing to monitor how well your athletes are responding to a training programme, this could give you a mixed bag of results. If your athletes improve their 1RM, what does this actually mean? It could mean they are stronger. It could mean they are more powerful. It could mean their lifting technique has improved (this is especially true in highly technical lifts like the snatch and power clean). Does any of this mean the athlete actually performs better in their event? Well, no, not necessarily. A stronger athlete is not necessarily a faster one, and an athlete with better lifting technique is almost certainly not automatically a faster one. This leads to the creation of surrogate markers – markers that the coach or athlete uses to monitor improvements in place of actually seeing whether the athlete has improved in their event. We see this quite often, especially in the UK with regard to power cleans. Often, as athletes lift more weight in the power clean, they expect to run faster.
The end result is that athletes can chase power clean improvements in the mistaken belief that these improvements are correlated with race performance, whilst in truth the correlation is probably weaker than they might have thought. So misleading test data, arising from poor test selection, can lead to the wrong variables being monitored and measured, giving a poorer indication of race performance.
Another aspect to consider is that, whilst tests may increase athlete motivation, they can also be incredibly stressful for the athlete. This effect can be further compounded if the athlete is using a large number of surrogate markers in a bid to predict how fast they are going to run in upcoming races. In these situations, testing takes on an even greater importance in the mind of the athlete, and performance in these tests becomes the main focus. What if an athlete performs sub-par in these tests? How will you enable them to bounce back? Can they regain their confidence? All things to consider when deciding whether or not to utilise testing.
What to Test?
Let’s be honest here – the only test that holds the utmost value is when your athlete lines up on the start line, is started by a gun, and is timed on how long it takes for their chest to cross the finish line. If the time that elapses is shorter than the athlete has ever run before, they have improved; if it's longer, they haven't. Obviously, this is affected by environmental factors, such as wind and altitude. And at the elite level, running a personal best is actually less important than simply running quicker than everyone else; arguably, in major championships your performance relative to your previous performances doesn't matter – all that matters is your performance relative to other people.
Now we have the obvious out of the way, we can discuss the actual tests that are used. It seems logical to want tests that very closely mirror the competitive requirements of the event; for sprinting we might test start performance, acceleration, maximum velocity, and speed endurance. As such, some tests we might conduct include 10m from blocks (testing reaction time and start performance), 30m from blocks (reaction time and acceleration), and a flying 30m (maximum velocity). These are pretty common performance tests, and they mirror the demands of the sprint events quite nicely. A test for speed endurance is a bit more difficult – in the past I have used time trials over 100m, 150m and 200m, repeated sprint ability over 100m/150m, or even a double flying 30 (30m acceleration, 30m flying, 20m float, 30m flying). I liked the double flying 30 because you could compare the two flying 30m segments against each other, but it was incredibly tough!
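The comparison between the two flying segments of the double flying 30 reduces to a simple drop-off calculation. A minimal sketch – the segment times below are illustrative assumptions, not real data or a published standard:

```python
def speed_endurance_dropoff(first_fly, second_fly):
    """Percentage slow-down from the first flying 30m segment to the second.

    Positive values mean the second segment was slower; a larger drop-off
    suggests poorer speed endurance. Times are in seconds.
    """
    return (second_fly - first_fly) / first_fly * 100.0

# Illustrative session: 2.85s on the first flying 30m, 3.02s on the second.
drop = speed_endurance_dropoff(2.85, 3.02)
print(f"Drop-off: {drop:.1f}%")
```

Tracking this drop-off percentage across a training block, rather than the raw segment times alone, gives one number that is at least loosely insulated from day-to-day variation in absolute speed.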
The above are all pretty specific performance measures, and they can be measured quite accurately – electronic timing gates are very affordable these days. Electronic timing is more reliable than hand timing, which is subject to human error and can vary between timers. One small downside of electronic timing is that it is quite easy to cheat – I could throw my arm out in front of me to break the beam, for example.
We all know, of course, that these aren't the only tests used. Quite often the standing long jump is used to measure power. If you improve your standing long jump, it could well mean you're more powerful, but would that improvement carry over into improvements in sprint performance? What about weight-room activities? If you improve your back squat personal best, you're clearly stronger – but are you faster? The same goes for medicine ball throws – I've never had to throw a ball in a 100m race, so I can't say with much certainty that adding 2m to my medicine ball throw personal best will lead to any improvement in my 100m race performance. In fact, all I can say is that my medicine ball throw is better. This comes back to those surrogate markers I discussed earlier: markers that we use in place of actual performance data. Take VO2max, for example. Improvements in VO2max might be desirable, but VO2max in and of itself isn't an event. So improvements in VO2max aren't really all that useful if event performance (say, a 5000m race time) worsens. There is actually an interesting study from Andrew Jones tracking Paula Radcliffe's VO2max against her 3000m performance: as her 3000m race times got faster, her VO2max score got worse. This illustrates nicely that direct performance measures are far more valuable than these surrogate markers. A similar example in the sprints might be the standing long jump.
Even selecting the right performance tests can lead to misleading data. In 2005 I ran a flying 30m personal best of 2.95 seconds, and ran the 100m in 10.22. In 2010 I ran a 2.78 flying 30, and clocked a season's best of 10.38 in the 100m. Even between members of your training group, tests can give misleading data. During the 2007 indoor season, of the three sprinters in my training group, I had the slowest times to 10m and 30m from blocks in training, but the 2nd fastest 60m time in the world that year.
The more you test, the more stressful this can become for the athlete, because they know they are being tested, and they also perceive that if they perform poorly it looks bleak for their upcoming competitive season. The first time anyone does a test is lovely, because you have no data to compare it to. The second time, you are likely to improve simply through practice. But the tenth time? The twentieth time? Now it starts to become more stressful. Testing is also really tiring – it involves maximum effort, so too much testing in too short a period can cause problems. Tests involving external loads, such as 1RM testing, can also place the athlete under greater loads than they have ever experienced before – and so you have no idea how they will respond! It would be bad news if your athlete ended up injured because of the testing process.
What about the timing of the test? As discussed, testing can be risky, both in terms of fatigue and injury, so it might be a good idea to avoid testing close to competition. From a psychological point of view, it can be very tempting to test close to competition, as a good result here could increase the athlete's confidence going into the race; I know from experience that running a flying 30m personal best 5 days before a big race can make you feel really good. But if the test doesn't go well? How will the athlete bounce back? Again, having run a reasonably slow flying 30m a few days before a race, I speak from experience when I say it can be a weight on your mind. The best time to test is determined by your reasons for testing – if you just want a baseline, then test at the start and end of each training block. If you want to collect information on what your athletes' training performance looks like when they are in really good (or bad) shape, then testing closer to competitions might be a bit more useful. Testing can also increase motivation for the athlete, so a well-timed testing session can result in improved training performance.
What do I mean by testing? Traditionally, testing refers to a specific session or group of sessions that athletes can prepare for, and in which data is collected. But what happens when the electronic timing gates start coming out on a regular basis? Is this testing? It would be naïve to think that athletes don't pay close attention to their times in these sessions and compare them to their best ever. Whilst you might think it reasonable and understandable for an athlete's flying 30m performance to drop off by a tenth or so during a training block, is the athlete mature enough to place that into context? I know I wasn't. And this then further increases the pressure for a good performance next time, when fatigue might be even higher, creating a downward spiral.
In sports outside of track and field, testing should also take into account positional differences. Another sport in which I was involved, bobsleigh, uses a specific set of performance measures in squad selection. One of these is a trolley push over 45m, with the 15m-45m split (i.e. 30m) timed and compared across individuals. Consider for a second the individual performance requirements within a 4-man bobsleigh: the pilot tends to push for about 20-25m, the second man generally won't push for more than 30-35m, whilst the last man (number 4) could conceivably push for the full 45m in competition. Is it therefore fair to hold all the different positions to the same standard – especially when what makes a good second man is not the same as what makes a good fourth man? Will the second man's competition performance suffer because he is training for a good test performance, when instead he should be focusing primarily on the first 25m? Again, I'm not sure of the answer, but it's certainly something to consider with squad-based testing. Similarly, in soccer, you wouldn't expect a goalkeeper to outperform a full-back in a repeated run test, so why subject them to the same test?
In conclusion, it might seem like I am against testing. I'm not. I think testing is a powerful tool, especially for developing the mental ability and resilience of your athletes through graded exposure to stress. I do, however, think it can often be overused, used for the wrong reasons, or simply used incorrectly. Hopefully this article has given you some questions to consider when it comes to testing, so that the performance of your athletes can be enhanced.