I published a series of articles about quality assurance (QA) in developing educational programs. However, QA doesn’t end after a curriculum is put into production. To continuously improve the curriculum, you need to conduct periodic revisions. Even if a curriculum were error-free at launch (a hypothetical scenario, because it never is), it would still require periodic revisions to protect against obsolescence and to incorporate the latest developments in the body of knowledge.
While the curriculum is in production, you need feedback loops on its quality to determine what revisions are required. There are a few tools for gathering this feedback.
Some errors in the content are not discovered until after the courseware is put into production. It’s critical to document these errors as soon as they’re discovered, because corrections are typically made sometime later.
The ideal way to document content errors is via a Web-based form that stores them in a database. The form should gather information like the specific content that is in error, the precise location of that content, what is erroneous about it, and what changes are required to correct it. The form should be accessible to learners, who should be encouraged to report any content they believe is erroneous. Once they begin studying the curriculum, learners are likely to discover more erroneous content than any other stakeholders, including the QA team.
However, learners are not always accurate at identifying erroneous content; sometimes they think content contains an error when it does not. So before correcting content based on a learner report, a subject matter expert (SME) should review the feedback to verify that it’s valid. If the error is confirmed, the instructional designer should prioritize the correction: some errors need to be corrected urgently, while others are trivial. The database should also store the results of this validation and prioritization.
This allows a batch of corrections to be made all in one project. Batch processing is a much more efficient way to correct content than trying to make every correction individually as soon as an error is discovered. Because some errors might be corrected in a different batch than others due to their priority, the content correction database should also allow the developers to flag an error as having been corrected.
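The workflow described above (report, SME validation, prioritization, batched correction) could be modeled with a simple record type. This is a minimal sketch, not a prescribed schema; the field names, the three priority levels, and the `next_batch` helper are my assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Priority(Enum):
    URGENT = 1   # e.g. health/safety inaccuracies
    NORMAL = 2
    TRIVIAL = 3

@dataclass
class CorrectionReport:
    content_excerpt: str                 # the content believed to be in error
    location: str                        # e.g. "Module 3, Lesson 2, slide 14"
    description: str                     # what is erroneous about it
    suggested_fix: str                   # what change would correct it
    reported_by: str                     # often a learner
    validated: Optional[bool] = None     # set by the SME review
    priority: Optional[Priority] = None  # set by the instructional designer
    corrected: bool = False              # flagged when a revision batch fixes it

def next_batch(reports, max_priority=Priority.NORMAL):
    """Select validated, uncorrected reports at or above a priority cutoff."""
    return [r for r in reports
            if r.validated and not r.corrected
            and r.priority is not None
            and r.priority.value <= max_priority.value]
```

Keeping `validated`, `priority`, and `corrected` on the same record is what makes batching possible: a revision project simply queries for confirmed, unfixed reports at the priorities it intends to address.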
Sometimes there is room for improvement even when the courseware is not technically erroneous. Various stakeholders often have ideas about how the courseware’s UI can be improved or how content can be presented more effectively. Feedback that isn’t about a specific error is essentially a suggestion for enhancing the curriculum, and it is just as valuable as feedback about erroneous content. Enhancement requests can come from SMEs, instructional designers, or learners, so it’s important to make submitting one easy; a Web-based form is again a useful way to do it. However, enhancement requests are typically more open-ended and less specific than content corrections, so the enhancement-request form can be simpler than the error-reporting form. Nonetheless, the instructional designer should still validate and prioritize enhancement requests. Enhancements can usually be implemented in the same project as a batch of corrections.
Job/task analysis (study of practice)
Some activities change over time. When the way something is done changes, courseware developed to train people to perform that activity becomes obsolete. It needs to be revised to account for the changes.
To determine whether the way an activity is performed has changed, the organization conducts a job analysis. It’s called a task analysis when the study focuses on a discrete task, and a study of practice when the scope covers an entire practice. Regardless of the scope, the activity is analyzed periodically; the frequency depends on the rate at which the activity changes. When a job analysis uncovers an activity that has changed since the prior analysis, it indicates the need for a revision and identifies the lessons that will require updated content.
It’s important to keep track of each version of a curriculum developed and know which one is currently in production. There are two types of revisions:
- Minor: This is the more common type of revision. It is used to correct inaccurate content or make other small changes to the courseware. The primary source of feedback for a minor revision is content corrections; it is less common to implement enhancement requests in a minor revision because enhancements are usually not minor. A minor revision is identified by a decimal, and subsequent minor revisions increment by a tenth (e.g., version 2.2 is followed by 2.3).
- Major: Most courseware will have no more than two major revisions over its lifecycle. A major revision makes substantial changes to the courseware, whether that’s adding a significant section of content or enhancing the UI. Although you should correct any content known to be inaccurate, the more important drivers of a major revision are enhancement requests and significant changes uncovered by the study of practice. A major revision is denoted with an integer, such as version 2; the .0 is implied. Since version 1.0 is the first version of a curriculum put into production, it is not considered a revision. There are rarely nine minor revisions preceding a major revision, so version 2 is more likely to follow version 1.5 than 1.9.
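The numbering convention above can be captured in a couple of small functions. This is a sketch of the convention as described; the function names are mine, and note that this scheme (unlike semantic versioning) drops the decimal entirely on a major revision:

```python
def bump_minor(version: str) -> str:
    """Increment by a tenth: '2.2' -> '2.3'; '2' (i.e. 2.0) -> '2.1'."""
    major, _, minor = version.partition(".")
    return f"{major}.{int(minor or 0) + 1}"

def bump_major(version: str) -> str:
    """Move to the next integer: '1.5' -> '2' (the .0 is implied)."""
    major, _, _ = version.partition(".")
    return str(int(major) + 1)
```

So a curriculum might progress 1.0 → 1.1 → … → 1.5 → 2, with each step recorded so you always know which version is in production.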
You should create a formal revision schedule for all your curricula. That way, your revisions are on the calendar well in advance and you know when slack resources will be required to work on them. When I managed the development of accredited continuing education, we scheduled major revisions every two years because it was a requirement of our accrediting institution. We would routinely conduct a study of practice every two years to see how the practice had changed and use our findings to inform the revision. We scheduled minor revisions every six months. This is a fairly typical revision schedule, but different business needs could make a different revision frequency better for your employer or client.
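Putting that cadence on the calendar is straightforward to automate. A sketch assuming the six-month minor / two-year major cadence described above (the function name and parameters are illustrative, and it assumes the launch date falls on or before the 28th of the month to avoid invalid dates):

```python
from datetime import date

def revision_schedule(launch: date, years: int,
                      minor_months: int = 6, major_months: int = 24):
    """Yield (date, kind) pairs for the revision calendar.

    Dates on the major cadence are major reviews; the remaining
    minor-cadence dates are minor reviews.
    """
    months = minor_months
    while months <= years * 12:
        # advance by whole months from the launch date
        y, m = divmod(launch.month - 1 + months, 12)
        review_date = date(launch.year + y, m + 1, launch.day)
        kind = "major" if months % major_months == 0 else "minor"
        yield review_date, kind
        months += minor_months
```

For a curriculum launched January 15, 2024, this yields minor reviews in July 2024, January 2025, and July 2025, and a major review in January 2026.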
There are exceptions to the rule. Just because a revision comes up on the calendar doesn’t necessarily mean that the courseware must be revised. The revision schedule dictates how frequently you review the curriculum and the QA feedback. If you find that the changes needed at a particular point in the schedule are insignificant, the revision can be skipped and reassessed at the next point in the schedule. There is a significant amount of overhead required to put even a minor revision into production, so it doesn’t make good business sense to go through the effort for trivial changes. Conversely, if an inaccuracy that could jeopardize someone’s health or safety is discovered in the content, it should be corrected immediately rather than waiting until the next scheduled revision.
Gathering quality feedback about curricula in production is critical to maintaining high quality. Revising the curricula on a formal revision schedule helps you continuously improve that quality in a cost-effective and manageable manner.