Once a student has been shown the process, and can be expected to have understood the purpose of the exercise, it's time for them to have a play and see whether they can implement what they've learnt.
In a software simulation scenario, you'll want to try to make the training exercise as close to identical to the real-world environment as you can. However, it's important to give your trainees a helping hand if they start to stray off the path. The best way to do this is to give visual and/or audio feedback as they move around the interface. As they roll over an active area, consider presenting a hint as a pop-up message, perhaps with a sound attached. Incorrect actions, clicks, keypresses and so on should also trigger error messages in the same way, with encouraging overtones.
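The rollover-hint and error-feedback idea above can be sketched as a simple lookup from interface events to feedback messages. This is a minimal illustration only; the event names, `FEEDBACK` table and `feedback_for` function are hypothetical and would be wired to real mouse-over and click handlers in an actual authoring tool.

```python
# Hypothetical mapping: (event type, interface target) -> (message, sound cue).
# Unknown events return None, meaning the simulation stays silent.
FEEDBACK = {
    ("rollover", "save_button"): ("Hint: click here to save your work.", "chime"),
    ("click", "wrong_menu"): ("Not quite - try the File menu instead.", "buzz"),
    ("keypress", "bad_shortcut"): ("That shortcut isn't active here. Keep going!", "buzz"),
}

def feedback_for(event_type: str, target: str):
    """Return (message, sound) for a known event, or None to stay silent."""
    return FEEDBACK.get((event_type, target))

print(feedback_for("rollover", "save_button"))
```

Keeping the feedback in a data table like this, rather than scattered through event handlers, also makes it easy to review the tone of every message in one place.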
Live scenarios are obviously more difficult to enact in an e-learning environment, but if the demonstration section of the module was correctly framed, trainees should be well aware of the simulation's limits. For instance, it's unlikely that a vehicle-driving simulation could ever replace the action of physically driving a car; however, it's an acceptable replacement for training purposes.
Nonetheless, images, audio and video combined with interactive elements can provide a rich environment in which trainees can learn about the task at hand, and can feed solid data back to the e-learning reporting system.
Testing / Assessment
Typically, the way you'd assess a student's competency is to set them a similar (but not identical) task to those covered within the demonstration and training sections of your e-learning module. I suggest the non-identical approach because all too often trainees can memorise answers without retaining the logic behind them. This can result in false positives and skew training analytics as well as practical outcomes.
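One way to make assessment tasks "similar but not identical" is to generate a fresh variant of the same underlying problem for each attempt, so trainees have to apply the logic rather than recall a memorised answer. The sketch below is illustrative only; the question template and value ranges are invented for the example.

```python
import random

def make_discount_question(rng: random.Random):
    """Generate one variant of the same underlying task:
    compute a price after a percentage discount."""
    price = rng.randrange(20, 200, 10)        # e.g. 20, 30, ... 190
    discount = rng.choice([10, 15, 20, 25])   # percent
    question = f"An item costs ${price}. What is the price after a {discount}% discount?"
    answer = round(price * (1 - discount / 100), 2)
    return question, answer

# Each trainee (or each attempt) gets a different seed, hence a different variant.
question, answer = make_discount_question(random.Random())
print(question)
```

The logic being tested stays constant across variants, so analytics remain comparable between trainees even though the surface details differ.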
Exercises where the trainee interacts in a way that fits intuitively into a defined workflow will not only produce better analytics and KPIs, but also stand a much better chance of embedding routines and natural logic, with longer-term benefits.
Exercises that offer little challenge risk being sidelined by trainees and will engender apathetic responses; that is a difficult position to recover from, and one that better research and planning can prevent.
Although quizzing is part of an e-learning assessment experience, it's often set apart from the more engaging and interactive sections by its typical user interface: multiple-choice questions with checkboxes, true/false questions with radio buttons, fill-in-the-blank questions and so on. All are basically form elements presenting mainly text-based questions in a fairly dry format.
Personally, I find this kind of competency assessment dull, but in some cases it really is the only way to present the information. It is important, however, to randomise the order in which possible answers are presented, and to draw the questions themselves from a pool of questions on the same topics. This ensures that trainees can't pass answers on accurately to other participants, and that higher levels of competency are achieved.
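The two randomisation steps above, shuffling answer order per trainee and drawing each quiz from a larger pool, can be sketched as follows. The question pool and topic name are illustrative only, not from any real question bank.

```python
import random

# Hypothetical question pool: (prompt, answer options, correct answer).
POOL = {
    "saving-files": [
        ("Which menu contains 'Save As'?",
         ["File", "Edit", "View", "Help"], "File"),
        ("What does Ctrl+S do?",
         ["Saves the file", "Closes the app", "Prints the file"], "Saves the file"),
        ("Where are autosaved copies kept?",
         ["In the backup folder", "On the clipboard", "In the trash"],
         "In the backup folder"),
    ],
}

def build_quiz(topic: str, n: int, rng: random.Random):
    """Pick n distinct questions from the topic pool, shuffling each
    question's answer order so no two trainees see the same layout."""
    quiz = []
    for prompt, options, correct in rng.sample(POOL[topic], n):
        shuffled = options[:]   # copy, so the pool itself stays intact
        rng.shuffle(shuffled)
        quiz.append((prompt, shuffled, correct))
    return quiz

for prompt, options, _ in build_quiz("saving-files", 2, random.Random()):
    print(prompt, options)
```

Because grading compares the trainee's choice against the stored correct answer rather than against a fixed position, the shuffle has no effect on scoring, only on what can be memorised or passed on.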