Let’s start with a few basic questions:
What is a “New Word Spelling Test”? In essence, it is a spelling test in which the individual is asked to spell words with which they are not familiar, either because the words are very rare or because they are made up. They therefore cannot have practiced the spelling, which makes the test very useful in evaluating an individual’s skill in the area of phonics.
So why is this important? Imagine starting a technical course where you need to learn a lot of new terminology. How do you know how to spell a word you hear in class if you do not have the basic skills learned through phonics? The answer is that you will have great difficulty, and guessing is often not an option. As a consequence, you may have great natural ability in the subject, but your writing skills let you down, so that your actual ability is not reflected in your exam scores.
Why does Profiler need a new New Word Spelling Test? Profiler has had several versions of the New Word Spelling Test (NWST) since the start. These have evolved to suit diverse (and emerging) needs. However, there are a number of issues with such tests, the chief of which are:
- They do not measure a single construct – If you look at the psychometric properties of a NWST, you will find that the reliability (alpha) is not great (0.80 for UK data and 0.71 for SA data), even with large numbers of data points. The reason is that these tests, unless they are very long, do not measure a single construct. That is, they do not measure “phonics” per se, but a series of factors, exacerbated by the nature of the English written language. Think of it as needing a separate test not only for all the vowels, but also for each of the vowel sounds. And that is before we think about the Rules of English Spelling (not to mention the exceptions), and the diversity of ways to spell a single (new) word. A single test that covers all of that would be unworkably long.
- Multilingualism in a community leads to diverse spellings – Experience in South Africa shows that the first language of an individual can significantly affect the way they spell a word. One good example is the (new word) mip (to rhyme with dip, pip, sip, tip etc). Results showed that more SA students, hearing this word spoken by a UK English speaker, wrote map than mip. However, consider that the pronunciation is the same as (or very close to) the word used to represent a diagram that shows you how to drive from A to B – a map! So can we say that the spelling map is wrong? No! But that makes the norming process, at best, problematic. Do we make a norm for every language? No, because then there are even more complications, such as: what do you expect the norm to be for a Grade 5 child whose first language is Zulu, who has been in an Afrikaans school since Grade 1 and is being tested in English?
So does this mean that, because of these complications, there is no place for a NWST? No: it means that we have to understand the context and the use of the test, and offer information accordingly. Put another way, as a criterion-based test, it is a very useful tool.
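For readers unfamiliar with the alpha statistic mentioned above, it can be computed from a respondents-by-items score matrix. The sketch below (a minimal illustration in Python, assuming a simple 1/0 correct/incorrect scoring scheme; this is not Profiler’s actual code) shows the standard Cronbach’s alpha formula:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    `scores`: 2-D array, rows = respondents, columns = test items
    (e.g. 1 for a correct spelling, 0 for an incorrect one).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

If the items all measure the same construct, their scores covary strongly and alpha approaches 1; a heterogeneous item set (the situation described above) drags alpha down even when every item is individually sound.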
Criteria for the new NWST
In order to ensure we have a new test that is fit-for-purpose for its wider use, I set up a series of criteria that it had to conform to. Below are these criteria, and the justification for each.
The test must be short – Teachers (and students) want quick answers. This new test will initially be open-ended, but data analysis will suggest the best cut-off time. The results below are for SA students, showing the time taken to complete the first eight questions, which are the monosyllabic ones; the total number of questions is twice this. The initial cut-off will be 20 minutes, or 1200 seconds.
The graph below shows that SA students take almost twice as long on average to complete these first eight questions (out of 16). Ages are comparable.
It should focus on simple sound-letter correspondence – There are many different levels and combinations one could work with, some of which have been used in previous modules. But experience suggests that results beyond the monosyllabic level have been little used. Therefore the intent here is to get back to basics: to check at the single-letter level and, to a lesser extent, a few combinations of consonants – though how do you choose which ones to use?
All words should be monosyllabic – This means there is minimal reliance on working memory.
It should cater for diverse spellings – There can be more than one correct spelling. For example, the three spellings fote, foat and phote all sound the same and are equally valid.
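Scoring against multiple accepted spellings can be as simple as a lookup from each target word to its set of valid variants. A minimal sketch (the word list and accepted variants here are illustrative only, not Profiler’s actual item bank):

```python
# Map each target (new) word to all phonically valid spellings of it.
# These entries are illustrative assumptions, not the real test items.
ACCEPTED = {
    "fote": {"fote", "foat", "phote"},  # all three sound the same
    "mip": {"mip"},
}

def is_correct(target: str, response: str) -> bool:
    """Return True if the response matches any accepted spelling."""
    return response.strip().lower() in ACCEPTED.get(target, {target})
```

This keeps the scoring rule (criterion: did they produce a valid spelling?) separate from the item data, so new accepted variants can be added without touching the code.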
There should be a good representation of the vowels – From previous research, the vowel sound is often the most problematic, especially in multilingual cultures. Therefore there should be adequate coverage, with more emphasis on the more common vowels.
Error analysis should be offered for the most common errors – There are a limited number of errors that can be made, mostly related to vowels. However, there can be confusion between some consonants, such as p / b.
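Because the common errors fall into a small number of categories, a first-pass classifier can be very simple. The sketch below (a hedged illustration; the confusion pairs and category names are assumptions for the example, not Profiler’s actual rules) distinguishes vowel substitutions from p/b-style consonant confusions:

```python
VOWELS = set("aeiou")
# Consonant pairs that are commonly confused by ear (illustrative).
CONFUSABLE = {("p", "b"), ("b", "p"), ("d", "t"), ("t", "d")}

def classify_error(expected: str, response: str) -> str:
    """Classify a single-letter error in a monosyllabic word."""
    if expected == response:
        return "correct"
    if len(expected) == len(response):
        diffs = [(e, r) for e, r in zip(expected, response) if e != r]
        if len(diffs) == 1:
            e, r = diffs[0]
            if e in VOWELS and r in VOWELS:
                return "vowel substitution"
            if (e, r) in CONFUSABLE:
                return "consonant confusion"
    return "other"
```

For example, the mip / map case described earlier would be flagged as a vowel substitution, which is exactly the kind of pattern a teacher would want surfaced.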
The test is criterion-based, not norm-based – The idea of this test is to know whether the person can get the right answer, not how well they do compared with others. Therefore, norms are not required. This does not mean they will not be available in future, but not in the short term.
Items will be presented in order of difficulty – This is required so that it is possible to use an exit system, such as 6 errors and out. This cuts the testing time. Implementation will only be possible once a minimum data set is available.
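The “6 errors and out” exit rule amounts to a simple early-stopping loop over difficulty-ordered items. A minimal sketch (assuming items are pre-sorted by difficulty; `ask` is a hypothetical placeholder for presenting an item and scoring the response):

```python
from typing import Callable, Iterable, List

def run_test(items: Iterable[str], ask: Callable[[str], bool],
             max_errors: int = 6) -> List[bool]:
    """Present items in order, stopping once max_errors wrong answers occur."""
    results = []
    errors = 0
    for item in items:
        correct = ask(item)
        results.append(correct)
        if not correct:
            errors += 1
            if errors >= max_errors:
                break  # exit early: the remaining items are harder anyway
    return results
```

Because items are ordered by difficulty, stopping after the error threshold loses little information: a student who has already missed six easier items is very unlikely to succeed on the harder ones.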
The test should also be a form of hearing test – As far as possible, the consonants should use a wide range of sounds. Of course, without repetitions, it is not possible to be sure. But it allows the teacher / tutor to review and make informed decisions.
Future research may look at patterns of errors – At this level, the analysis is on an item-by-item basis. That is, there is no current ability to look at, say, the errors in one vowel type, or final-consonant difficulties. This will be possible in future.