FAQs

Why doesn't LASS include a measure of 'speed of processing'?

LASS does not generate any measures of 'speed of processing', mainly because such assessment is fraught with conceptual and practical difficulties. Although the term 'speed of processing' is often used in the context of SEN assessment, in reality there can be no generic or general measure of 'speed of processing', because a test always has to measure the speed of processing some content or other. The content can be verbal, visual or both, but the results will always reflect the nature of that particular content. In other words, instead of asking "How quickly can this student process?" you actually have to ask "How quickly can this student process X?", where 'X' refers to a certain type of information or a certain activity (e.g. reading). In the context of literacy this presents a problem, because when people carry out a task there is always a trade-off of speed against accuracy: we can ask (or encourage, or force) people to do things as quickly as possible, but in doing so we will inevitably affect the accuracy of their work. For example, if you ask a student to read something as quickly as they can, you will invariably find that comprehension is poorer than if you ask them to read at their own pace.

One possible solution is to have lots of different tests measuring the speed at which a student can do various things, and then average the results. But this would not necessarily be very helpful, nor would it properly reflect the student's abilities, because a student may be fast at some tasks (e.g. maths) and slow at others (e.g. writing); simply averaging the results would be quite misleading, as the illustration below shows. You would also need quite a lot of different 'speed of processing' tests, and most teachers would struggle to find the time to administer them on top of the other tests a student needs to be given.
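
To make the point concrete, here is a toy numerical illustration in Python (the figures are invented purely for illustration and are not something LASS computes): a student who is one and a half standard deviations faster than average at one task and equally slow at another averages out as exactly 'normal'.

    # Invented z scores for one student's speed on three tasks.
    # Positive = faster than average, negative = slower than average.
    speeds = {"maths": +1.5, "reading": 0.0, "writing": -1.5}

    average = sum(speeds.values()) / len(speeds)
    print(average)  # 0.0 -- the average looks perfectly normal,
                    # yet it conceals serious slowness in writing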

Another possible solution is to use a very easy, low-level cognitive task and see how fast the student can do that. Unfortunately, this speed may bear little or no relation to the student's rate of work in the real educational activities they face, because speed of processing is also limited by the student's basic skills: e.g. if a student can't spell very well, their rate of writing is likely to be slow because they have to spend a lot of time trying to work out the spellings of the words they want to use.

Consequently, any assessment of 'speed of processing' is bound to be biased or limited in various ways. In the educational context, however, the important thing is whether a student can work at the rate of the rest of the class, because if they can't they will inevitably fall behind. In practice, the best course of action is therefore to make a professional judgement, based on observation of the student, as to whether their rate of work is generally much slower than that of most other students.

In single word reading the student got only one or two answers wrong, but their centile score is very low. Why is this?

In this test in particular, it is possible for a pupil to get an unexpectedly low centile score even if he or she got only one or two test items wrong.

Although this may be alarming, it is statistically accurate (it follows directly from the original norming sample) and reflects what is known as a ceiling effect. In simple terms, nearly all of the pupils in the original standardisation sample got all the items right, so missing even one or two items is enough to place a pupil below the great majority of that sample.
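
To see why, consider a hypothetical norming sample for a 20-item test in which 90% of pupils scored full marks. The figures and code below are invented purely for illustration; they are not LASS's actual norms or scoring method.

    # Hypothetical raw scores (out of 20) for a norming sample of 100 pupils.
    # 90 pupils got full marks -- a ceiling effect.
    sample = [20] * 90 + [19] * 6 + [18] * 4

    def centile(raw, sample):
        """Midpoint centile: % of the sample scoring below `raw`,
        plus half the % scoring exactly `raw`."""
        below = sum(1 for s in sample if s < raw)
        equal = sum(1 for s in sample if s == raw)
        return (below + equal / 2) * 100 / len(sample)

    print(centile(20, sample))  # 55.0 -> full marks is merely average here
    print(centile(19, sample))  # 7.0  -> one wrong item drops the pupil to the 7th centile

Because almost everyone in the sample scores at the top of the test, there is no room above the ceiling to separate good performers, and even a single error pushes the centile down sharply.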

What are Z scores?

Z scores are also known as 'standard deviation units'. To understand them you need to know a little about statistics. A z score of +1.0 signifies that the student's raw score was one standard deviation above the mean of the statistical population on which the test was standardised, and a z score of –0.5 signifies that the student's raw score was half a standard deviation below the mean of that population. Z scores can be converted to standard scores (which have a mean of 100 and a standard deviation of 15) by multiplying the z score by 15 and adding the result to 100 (for a negative z score this amounts to subtracting from 100). For example, a z score of –0.67 is equivalent to a standard score of 100 – (0.67 × 15) = 89.95, which rounds to 90.
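
The conversion can be written out in a couple of lines of Python (a minimal sketch for illustration; the function is ours, not part of LASS):

    def standard_score(z, mean=100.0, sd=15.0):
        """Convert a z score to a standard score (mean 100, SD 15)."""
        return mean + z * sd

    print(standard_score(-0.67))  # 89.95, which rounds to 90
    print(standard_score(1.0))    # 115.0 -- one SD above the mean
    print(standard_score(-0.5))   # 92.5  -- half an SD below the mean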

Why choose assessment using a computer?

Lucid Research’s computer-based assessment products offer many advantages over conventional assessment methods (i.e. assessment delivered by the teacher), including:

  • Consistent and precise presentation
  • Improved accuracy of measurement
  • Speedier administration (especially adaptive tests as in LASS)
  • Little training needed to administer (since the computer does most of the work!)
  • Labour (and cost) saving
  • No need to spend time scoring results and looking up tables of norms
  • Results available instantly (computer-based assessments produced by some other companies do not have this benefit)
  • Enjoyable for children (many children, especially those with poor performance, greatly prefer computerised assessment to conventional assessment)
  • Confidential for adult self-assessment

References

Singleton, C.H. (2001) Computer-based assessment in education. Educational and Child Psychology, 18, 58-74.

Why choose LASS?

LASS is a fully standardised suite of computerised tests with national norms that is used in over 6,000 schools in the UK and elsewhere in the world. It has an excellent, well-established reputation and is produced by Lucid Research Ltd, the world's leading developer of specialist assessment software for education. Lucid's products were cited as examples of good practice in the House of Commons Education and Skills Committee report on SEN in 2006. Lucid's products are backed by high-quality scientific research published in international peer-reviewed journals, and have been adopted by local authorities and by prominent national educational projects run by the British Dyslexia Association and other organisations.

References

House of Commons Education and Skills Committee (2006) Special Educational Needs Report. Third Report of Session 2005-2006, Volume 2, Oral and written evidence, EV 100, 101, 114 and 115. London: HMSO.

Cowieson, A. (2009) North Ayrshire’s developing approach to identifying and meeting the needs of learners with dyslexia. The Dyslexia Handbook 2009/10. Reading, Berks: British Dyslexia Association, 53-63.

Singleton, C.H. (2009) No To Failure: the results of the intervention study. The Dyslexia Handbook 2009/10. Reading, Berks: British Dyslexia Association, 21-29.

What evidence is there that the tests in LASS are valid and reliable?

The validity and reliability of LASS have been fully investigated by Horne (2002). She found that all the tests in LASS correlated significantly with equivalent established conventional tests, such as the Phonological Assessment Battery, the Wechsler Memory Scales, the British Spelling Test Series, NFER Reading Tests, and the Matrix Analogies Test of Non-Verbal Reasoning. In a separate study, Horne found that when pupils were retested on LASS after an interval of four weeks, the test-retest reliabilities were mainly in the region of 0.8-0.9, and all were highly significant. These studies have therefore demonstrated that the tests in LASS all meet psychometric standards of validity and reliability.

References

Horne, J.K. (2002) Development and evaluation of computer-based techniques for assessing children in educational settings. Unpublished Ph.D. thesis. Hull: University of Hull.

Don't computer-based tests favour boys over girls?

This is a myth. Computer-based tests do not, in general, favour one gender or the other. In a number of studies investigating gender differences in assessment methods, Horne (2007) found less gender bias in computer-based tests than in conventional tests. Comparing LASS with equivalent conventional tests, she found that only one of the tests in LASS showed a significant difference between the performances of boys and girls, and that was the LASS Spelling test, in which girls, on average, tended to score slightly higher. However, this gender difference was also found with the conventionally administered tests of spelling. Interestingly, pen-and-paper versions of the tests in LASS showed greater gender differences than the computerised versions, and all in favour of females. So there is good evidence that computer-based tests are fairer, as well as being more consistent and objective, than conventional tests.

References

Horne, J.K. (2007) Gender differences in computerised and conventional educational tests. Journal of Computer Assisted Learning, 23(1), 47-55.

What evidence is there that the tests in LASS can help to identify children with dyslexia?

Horne (2002) compared LASS with a battery of conventional tests including the Phonological Assessment Battery, the Wechsler Memory Scales, the British Spelling Test Series, NFER Reading Tests, and the Matrix Analogies Test of Non-Verbal Reasoning. Using purely objective criteria of discrepancy between literacy measures, cognitive measures and overall intelligence, she found that LASS had an overall success rate of 79% in identifying dyslexia, while the battery of conventional tests had a success rate of only 63%. LASS was also better, in general, at distinguishing SEN from non-SEN children. Note that this approach does not take advantage of the facility to compare performance on the different tests in LASS (all of which have been standardised on the same population), which allows the teacher to take account of individual strengths and weaknesses and enables the identification accuracy of LASS to be improved still further.

Note also that while administration of LASS took approximately 45 minutes per student, administration of the conventional tests took over 2½ hours per student. Further time savings can be achieved by administering LASS to groups of students using networked computers.

References

Horne, J.K. (2002) Development and evaluation of computer-based techniques for assessing children in educational settings. Unpublished Ph.D. thesis. Hull: University of Hull.

Singleton, C.H. (2007) Computerised screening and assessment for dyslexia. The Dyslexia Handbook 2007. Reading, Berks: British Dyslexia Association, 193-197.

How easy is it to interpret LASS results?

LASS results are shown as a graphical profile (a bar chart) and as a summary table that gives centile scores, age equivalents and an indication of whether any scores are significantly lower than would be predicted from the student's non-verbal intelligence. It is therefore very easy to see, at a glance, the areas in which the student is below their peers or below the level expected from their general ability. The graphical profile is also particularly useful for providing feedback to other teachers who are not familiar with the program, and is a helpful way to convey results to parents.
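
As a toy illustration of the idea behind that summary table, the Python sketch below flags any score falling well below the non-verbal reasoning score. The profile data, the test names and the fixed-gap rule are all invented for illustration; LASS's own comparison is derived statistically from its norms.

    # Invented centile scores for one student; the fixed 30-point gap is a
    # toy rule, not the statistical method LASS actually uses.
    profile = {
        "Single Word Reading": 8,
        "Spelling": 12,
        "Memory": 45,
        "Non-verbal Reasoning": 60,
    }

    baseline = profile["Non-verbal Reasoning"]
    GAP = 30  # flag anything this many centile points below the baseline

    for test, score in profile.items():
        if test != "Non-verbal Reasoning" and baseline - score >= GAP:
            print(f"{test}: centile {score}, well below the expected level ({baseline})")

Here reading and spelling would be flagged while memory would not, which at a glance suggests a literacy-specific difficulty rather than general low ability.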

Using any new approach inevitably involves some learning, and much will depend on the individual teacher's experience, confidence and degree of familiarity with the profile approach to interpreting test results. With practice, however, most teachers soon become proficient at interpreting LASS results.

On our website we have also provided video guides, case studies and written interpretation guides to assist you.

References

Singleton, C.H. (2004) Using computer-based assessment to identify learning problems. In L. Florian and J. Hegarty (Eds) ICT and Special Educational Needs. Open University Press, pp. 46-63.

Why does LASS give a profile of results rather than a probability of dyslexia?

Tests that give a probability of dyslexia are specifically designed to screen for dyslexia; Lucid's product 'Lucid Rapid Dyslexia Screening' does just that. However, LASS is a broader, more comprehensive assessment tool, which helps teachers not only to identify dyslexia but also to identify other learning problems. The profile approach enables teachers to appreciate students' strengths as well as their limitations, both of which have educational implications. For example, LASS makes it very straightforward to see whether a student's problems (e.g. in reading or spelling) might be due to general low ability, to memory limitations, or to difficulties of a dyslexic nature. This information then helps the teacher to devise more effective strategies and interventions to support the student. The LASS Teachers Manual and the Lucid website contain many helpful case studies and teaching suggestions to assist interpretation and support.

What are the main differences between screening and assessment?

Screening is a relatively quick procedure designed to indicate which students are likely to have a particular difficulty, such as dyslexia, so that they can be investigated further; Lucid Rapid Dyslexia Screening is a screening product. Assessment is a fuller and more detailed examination of a student's learning, providing a profile of strengths and limitations and helping to uncover the reasons why a student may be experiencing particular problems; LASS is an assessment product (see also 'Which product should I choose?' below).

Who can administer LASS and interpret LASS results?

Because LASS is very straightforward to administer and the computer does most of the work, almost any competent adult can administer LASS with minimal training, simply by following the guidance in the manual. So it does not have to be a teacher – it could be a teaching assistant, for example. However, interpreting LASS results does demand professional educational skills, and so should be left to a qualified teacher. For this reason, LASS is only available for purchase by schools, qualified teachers, other educational institutions and certain other professionals connected with education (e.g. in speech therapy or careers guidance). Interpretation of LASS results is often carried out by an SEN teacher or SENCo, but it does not have to be a teacher in that position. However, it is essential that users read the LASS Teachers Manual before attempting to interpret results. There are also helpful case studies and teaching suggestions in the manual and on the Lucid website that can assist interpretation and support.

Which product should I choose – LASS or Lucid Rapid Dyslexia Screening?

It depends what you want to achieve. If all you want is to identify which students are likely to have dyslexia, as speedily and as reliably as possible, then the best product is Lucid Rapid Dyslexia Screening. However, if you want to obtain a fuller picture of students' learning, their strengths as well as their limitations, and also to uncover the reasons why a student may be experiencing particular problems in learning, then LASS is the best product to use. LASS takes longer to administer than Lucid Rapid (about 45 minutes compared with 15 minutes), but since the tests are delivered by the computer and do not require intensive supervision, most schools do not find this a problem.

LASS 8-11 and 11-15 Age Equivalents Tables