If you are engaged in any form of professional communication, at some point you will find yourself editing, grading, or evaluating writing. I have found it helpful to be intentionally aware of when I am doing each of these tasks so I can get the most from the experience. After all, every experience in communication, whether written or oral, is an opportunity to learn something new.
Here is a summary of these tasks, along with the insights I've gained about each:
The editing process kicks in fresh from the afterglow of creation; that is, after a piece of writing has moved through the brainstorming stages to an early draft and then to a viable draft ready for objective review. Editing, though, does not begin objectively. It actually requires that the editor approach the work with some sense of creativity and openness, because to properly edit something one must first become acquainted with its general premise and contents.
For this reason, good editing calls for being a very active reader first, allowing oneself to get on good terms with the writing by skimming, making notes, and briefly reflecting on what's been written. In this sense, editing is a bit of a recap of the early writing process.
Once a level of comfort emerges, the editor can proceed in a more objective and detached manner, moving through the document and providing preliminary feedback. This can be done using the Review function in Word or a Word-like program, or any equivalent method that involves highlighting passages and making comments. It is often during this stage that small-, medium-, and large-scale revisions are made as clarity emerges about the larger purpose and intent of the writing.
Once the preliminary review is completed, the next step is the line-by-line edit.
A line-by-line edit is all the tediousness it implies: literally every sentence is closely scrutinized. Precision and conciseness are the objectives. This stage is also the inevitable transition to actual proofreading. Proofreading is, and must always be, deeply left-brain. The emphasis on precision and conciseness should move to a metaphysical level at that point.
The grading of writing is more art than science. While there is certainly a science element, namely a rubric to assist in determining a numerical value that translates to a letter grade, the art comes from the intuitive qualities required to determine how to help the student improve.
Yes, that should be the goal in grading writing–always.
To accomplish this, the grader needs to do a preliminary review of the writing, much as one does in the editing process. But in the next review, rather than doing a line-by-line edit, the grader identifies and isolates patterns of errors. The grader should call attention to a representative sample of the problems found in the writing, make corrections, and then direct the writer to information that she or he can review and apply in future writing assignments. I believe all grading of writing should take this developmental approach. Students should be empowered, not cowed, by the professor's red pen.
Rule number 1: the evaluating of writing is not the grading of writing. Rule number 2: refer to rule number 1.
The evaluating of writing usually occurs in an institutional setting for research, or it may occur as an entry requirement for a job or a school. However it occurs, the evaluator, unlike the editor or grader, should have some very strict parameters to follow. Evaluation of writing should be highly impersonal, with specific, rubric-guided qualities being sought in the writing. And here is one more "should": the purpose of evaluating a person's writing should be explained upfront and in detail to that person. In many ways this is like participating in a survey or being recruited for a focus group.
The reason for these stringent specifics is that evaluated writing is often gathered as data to improve curriculum in the academic setting or, in the business setting, to refine hiring standards. (In many instances, too, such writing may be identified only by a code rather than the person's name.)
As for the actual evaluation of the writing sample, it is much more to-the-point than the editing and grading processes. The evaluator reviews the writing sample in much the same manner as an editor or grader but then moves more quickly to a final assessment; a well-designed rubric helps make this possible. Because it is often a cog in a larger institutional or organizational system, the evaluating (and/or assessing) of writing is done expeditiously. Not surprisingly, there are emergent rumblings in the Artificial Intelligence (AI) community about software that could streamline writing sample evaluation even more.
Dr. McTyre’s blog entry is published monthly. Contact him at email@example.com