Project category: cooperation for innovation
Project outcome: development of adaptive digital placement tests for languages
Determining the level of existing language competence in potential learners is of utmost importance when developing an individual education strategy. This importance can be demonstrated with simple statistics: over the last three academic years, only between 40 and 45 percent of new students (i.e. students who had never previously taken a course with us) enrolled at the entry level (A1.1). In other words, most of our students come to us with some prior language competence. Since practically the entire world adopted the CEFR more than ten years ago and most language courses are aligned with this framework, it is necessary to match that existing competence with one of the CEFR levels (A1 to C2).
Placement tests for determining this competence have been used for as long as there have been language schools, so the concept itself is not novel. Testing practices, however, have not changed significantly for decades, while the information technology that could improve them has evolved dramatically. Early placement tests were strictly linear: a candidate would work through a fixed set of questions, and their language competence would be determined from a single score on a scale from 1 to n (where n is the total number of questions). This linear approach worked to a degree, but it also had disadvantages that negatively affected the final result; in particular, a single raw score hides which questions were answered correctly. Jantar compensated for these disadvantages by analyzing specific groups of questions within the test and creating our own scoring system based on years of experience. If the result was still unclear, one of our teachers administered an additional oral test to pinpoint the specific language level.
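The linear approach described above can be sketched in a few lines. The CEFR cut-off bands below are hypothetical, chosen only to illustrate how a single raw score is mapped to a level, and why such a score discards information about which particular questions were answered correctly:

```python
def linear_placement(answers):
    """Traditional linear scoring: one point per correct answer out of n
    questions, then a lookup against fixed cut-off bands.
    The bands are illustrative assumptions, not Jantar's actual scale."""
    score = sum(answers)              # single score on a scale from 0 to n
    ratio = score / len(answers)
    bands = [(0.2, "A1"), (0.4, "A2"), (0.6, "B1"), (0.8, "B2"), (0.9, "C1")]
    for cutoff, level in bands:
        if ratio <= cutoff:
            return level
    return "C2"

# Note that [1,1,1,0,...] and [0,...,1,1,1] map to the same level,
# even though getting only the hardest items right suggests a different
# competence profile; this is the information loss discussed above.
print(linear_placement([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]))
```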
In recent years some institutions, such as Cambridge Assessment English, have started developing their own digital tests, but these are only partially adaptive and available exclusively for English. Similar products have been made by publishers such as Pearson and Oxford University Press, but these tests are, once again, limited to English and offer a very low level of adaptivity.
Through this project, Jantar will develop its own innovative and fully adaptive tests for determining the existing level of English, German, French, Italian, Spanish and Russian in our potential students. The unique algorithms for question selection and progressive determination of language competence will be developed by Amber IT Solutions, an IT company from Split, which will be the first to apply to placement testing the mathematical rating models used, for example, to rank chess players. Algorithms of this kind are currently not used by any test on the market.
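The reference to chess-rating models suggests an Elo-style logistic update, in which each question carries a difficulty rating, the candidate's ability rating shifts after every answer, and the next question is chosen close to the current estimate. As a rough illustration only (the actual algorithm being developed by Amber IT Solutions is not described here, and every name and number below is an assumption), such an adaptive loop could look like this:

```python
def expected_score(ability, difficulty):
    """Probability of a correct answer under the Elo/logistic model."""
    return 1.0 / (1.0 + 10 ** ((difficulty - ability) / 400))

def update_ability(ability, difficulty, correct, k=32):
    """Shift the ability rating toward the observed result (K-factor assumed)."""
    return ability + k * ((1.0 if correct else 0.0) - expected_score(ability, difficulty))

def pick_next(questions, ability):
    """Select the unanswered question whose difficulty is nearest the estimate."""
    return min(questions, key=lambda q: abs(q["difficulty"] - ability))

# Toy item bank; mapping difficulty ratings to CEFR bands is an assumption.
bank = [{"id": i, "difficulty": d}
        for i, d in enumerate([800, 1000, 1200, 1400, 1600, 1800])]

ability = 1200  # neutral starting prior
for answer in [True, True, False]:  # simulated candidate responses
    question = pick_next(bank, ability)
    bank.remove(question)
    ability = update_ability(ability, question["difficulty"], answer)
print(round(ability))
```

Unlike a linear test, the estimate here converges toward the candidate's level in far fewer questions, because each question is selected to be maximally informative given the answers so far.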
In addition to Jantar, the language-methodology part of the project will be co-developed by British School Pisa from Italy and Blackbird from Serbia, renowned language schools with implemented quality-assurance systems. Additional support is provided by Molehill from Spain, a holding company with shares in multiple language schools across the globe. Our aim is to create unique tests that determine the existing language level of our potential students with surgical precision, ensuring enrollment in an adequate course. In this way we aim to eliminate drop-outs caused by the loss of motivation that follows enrollment in a course that does not match one's existing language competence.
1st transnational meeting
Host: Amber IT Solutions – Split, Croatia
Unfortunately, the COVID-19 pandemic forced us to hold our first transnational meeting online instead of in Split. Thanks to advances in IT and the competences of our project team, however, the meeting was organized without any difficulty. Over two full days of joint brainstorming, we successfully defined all the details necessary to commence the development process.