The Michigan English Placement Test (Michigan EPT) is a computer-based test designed to quickly and reliably place ESL students into homogeneous ability levels. It provides an accurate assessment of a test taker’s general receptive language proficiency by measuring performance in the following key skill areas:
- listening comprehension
- grammatical knowledge
- vocabulary range
- reading comprehension
Teachers and program administrators will be able to confidently place ESL students into appropriate levels and classes based on a Michigan EPT score. The number of ability levels and the associated cut scores an institution sets will depend on the English language program in which the Michigan EPT is used.
Michigan EPT forms D, E, F, G, H, and I are parallel forms of the test. A score conversion table is available to compare scores on one form of the Michigan EPT to scores on any of the other five forms, as well as to compare scores from the original Form A, if necessary.
Specific cut scores must be determined in the local context. Michigan Language Assessment strongly recommends that only overall scores be used for placement decisions; separate section scores (listening, grammar, vocabulary, and reading) are not recommended for this purpose.
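As an illustration of how locally determined cut scores might drive placement from overall scores, the sketch below maps an overall score to a course level. The cut-score values and level names are hypothetical; each institution sets its own in its local context.

```python
# Hypothetical placement sketch: map an overall Michigan EPT score to a
# course level using institution-defined cut scores. The cut scores and
# level names below are illustrative only; real values must be set locally.
from bisect import bisect_right

# Lower bound of each level above the first, in ascending order.
# Scores below the first bound fall into the lowest level.
CUT_SCORES = [30, 45, 60, 75]          # hypothetical values
LEVELS = ["Beginner", "Low Intermediate", "Intermediate",
          "High Intermediate", "Advanced"]

def place(overall_score: int) -> str:
    """Return the course level for an overall EPT score."""
    return LEVELS[bisect_right(CUT_SCORES, overall_score)]

print(place(52))  # 52 falls between cut scores 45 and 60 -> "Intermediate"
```

Because only the overall score is consulted, this approach follows the recommendation above; section scores never enter the placement decision.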
Michigan Language Assessment developed its English Placement Test forms in response to user feedback on the original EPT, which was created at the University of Michigan. The feedback from users indicated a strong demand for a highly reliable placement test that could be administered quickly and easily and that allowed teachers to stream students into appropriate ESL courses.
Users of the EPT also wanted new content. In response, Michigan Language Assessment developed six new EPT forms with fewer items and a shorter overall administration time than the original EPT forms, while maintaining the same high level of reliability that users had come to expect.
The first of these new forms were forms D, E, and F. The 240 items that appear on these forms went through an intensive pilot-testing process in 2012. More than 400 items were pilot tested on 573 students at thirteen different institutions—universities, community colleges, and language schools—across North America. Select items from the original EPT Form A were embedded in the pilot test; additionally, a subset of the pilot test population took the original EPT Form A to create the score conversion data between the original EPT and forms D, E, and F.
After the piloting process, the performance of the items for forms D, E, and F was analyzed. Items that performed poorly were rejected. Items that performed strongly were compiled into forms D, E, and F using item response theory, ensuring that the three forms are equivalent in difficulty.
To expand available content while maintaining parallel forms, development of additional forms G, H, and I began in 2014. The 240 items that appear on forms G, H, and I went through an intensive pilot-testing process similar to that used for forms D, E, and F. More than 400 items were pilot tested on 382 students at numerous institutions across nine different countries. Items from the D, E, and F pilot forms were embedded in the G, H, and I pilot forms to link the two datasets.
After the piloting process, the performance of the items for forms G, H, and I was analyzed. Items that performed poorly were rejected, while items that performed strongly were compiled into forms G, H, and I using item response theory to ensure that the three forms are equivalent in difficulty. The Test Information Functions for forms D, E, F, G, H, and I were then compared to confirm that all six forms are equivalent in difficulty.
Forms D, E, F, G, H, and I of the Michigan EPT have proven to be equivalent in difficulty; they are identical in format but completely unique in content. Thus, users of the Michigan EPT can use any of the six forms and be assured of high reliability across forms.