Digital testing and paper-based testing accomplish similar goals, but the two formats necessarily differ in some respects. As Michigan Language Assessment prepares to launch a digital version of MET Go! in April, we reflect on the similarities and differences between them.
Many learners today are digital natives, as adept at swiping and scrolling as they are at turning pages. Having digital testing (in addition to paper-based testing) allows them to choose the modality they prefer for demonstrating their English language proficiency.
Of course, the two delivery formats cannot be identical in every respect. Clicking a mouse to navigate between pages on a screen is qualitatively different from paging through a test booklet, and the amount of information visible at any one time differs as well.
In some cases, differences between the two formats may be construct-relevant: related to the very thing we are trying to measure, English language proficiency. To the extent possible, tasks need to be made similar or comparable across the digital and paper tests. Take the example of a listening comprehension test. If one modality presents the questions to test takers while they are listening to the stimulus, whereas the other withholds the questions until the stimulus has finished, what each version actually tests may change: getting the answer correct may depend more on memory in the latter case than in the former.
As for the writing section of the tests, typing a response is different from writing it by hand, which could affect how raters grade samples from the two modalities. However, much research has shown that with proper rater training, score outcomes across the modalities are comparable. The same is true for speaking: human-delivered tests are highly standardized through a script that tells examiners what to say, so translating the test into a digitally delivered one changes its essential nature only minimally.
When creating the digital versions of our tests, we thought through issues such as those discussed above in order to make the digital and paper formats as similar as possible. In any event, offering each test in both modalities lets test takers select the format they are most comfortable using.
On the other hand, some differences between the paper and digital modalities are construct-irrelevant; that is, they do not affect the essential nature of what is being tested. For example, font size should have no effect on a person's English language proficiency. Accommodations such as large-print booklets can therefore be requested in paper-based testing. In digital testing, accessibility features such as font size are already built into the testing platform, allowing all test takers to make adjustments that suit them. Color settings can also be modified to make test content easier to see on screen. Differences in construct-irrelevant matters are insignificant, and they can even be a good kind of difference, allowing us to maximize the possibilities each medium offers.
One advantage of digital testing is that test takers' responses are transmitted instantly to Michigan Language Assessment for scoring and security checks, which allows us to provide results quickly. For some test takers, this can be an important, positive difference: receiving results in a timely manner lets them act on feedback and suggestions for further learning sooner.
We look forward to serving you with these new digital tests that are the same as the paper-based exams where they need to be the same, and different where they need to be different. With these two great options to measure proficiency, we are better able to help learners prove their English, achieve their goals, and own their futures.