Project: Assessment Reports
Client: English Literacy Platform
Role: Instructional Designer
Year: 2022-2024

Overview
This project involved the development of a digital reporting system. Before and after the online literacy course, students compose paragraphs in response to a prompt, and their performance is evaluated on a 0-20 point scale. Personalized PDF reports aim to provide meaningful, individualized feedback to each student.

REQUIREMENTS

Our goal was to develop report templates that were:

  1. Visually engaging
  2. Easy to understand
  3. Customizable to student performance levels

The critical challenge was designing both the reports and the system to generate them on a limited budget and with minimal development support.



RESEARCH

After researching report-creation and data-visualization software, I discovered Canva's "bulk create" feature. Using an existing tool allowed me to keep the interface motivational and user-friendly, flexibly display different skill levels, and generate hundreds of reports almost instantly, all without spending extra funds.

I also researched visual references for the reports. Our team was drawn to those issued by the Smarter Balanced Assessment Consortium.

DESIGN

Using Canva, I designed pre- and post-assessment reports that incorporated key components specified by the Product Lead:

  • CEFR scale graphic + tick mark to indicate the student's score
  • Student's writing sample
  • Personalized feedback based on score
  • Pre-assessment: Individual student goal
  • Post-assessment: Goal achievement status
  • Colorful design elements

While Canva offered design flexibility, its bulk create feature presented several workflow challenges:

  • Limited to 15 unique variable fields
  • Required text imports, as visual input fields were unreliable
  • Necessitated manual proofing of each report to prevent formatting errors
  • Generated a single PDF that required manual separation, renaming, and distribution

These constraints had a significant impact on the design, as well as on the commentary for each score. Report copy was written in collaboration with the content team.
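
To give a sense of the data preparation behind bulk create, below is a minimal sketch of how the input CSV might have been assembled. The column names, score bands, and feedback copy are all hypothetical; the real report copy was written with the content team, and the column count had to stay within Canva's 15-field limit.

```python
import csv

# Hypothetical score bands mapped to feedback copy (the real thresholds
# and text were written in collaboration with the content team).
FEEDBACK_BANDS = [
    (0, 5, "You are just getting started. Focus on writing complete sentences."),
    (6, 10, "Good progress! Try organizing your ideas into clear paragraphs."),
    (11, 15, "Strong writing. Keep refining your word choice and supporting details."),
    (16, 20, "Excellent work! Challenge yourself with more complex sentence structures."),
]

def feedback_for(score: int) -> str:
    for low, high, text in FEEDBACK_BANDS:
        if low <= score <= high:
            return text
    raise ValueError(f"Score out of range: {score}")

# Read raw assessment data and write the CSV that the bulk create
# feature consumes. Canva allows at most 15 unique variable fields,
# so every column here must map to a template placeholder.
with open("scores.csv", newline="", encoding="utf-8") as src, \
     open("bulk_create.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(
        dst, fieldnames=["name", "score", "writing_sample", "feedback", "goal"]
    )
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "name": row["name"],
            "score": row["score"],
            "writing_sample": row["writing_sample"],
            "feedback": feedback_for(int(row["score"])),
            "goal": row["goal"],
        })
```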

Piloted in Latin America and India, Version 1 received mostly positive feedback. Students appreciated the personalized insights, writing samples, and improvement suggestions. The design was visually engaging and tailored to each student. However, feedback highlighted four issues:

  • Scores near zero felt discouraging.
  • Students wanted to know how they did compared to their peers. 
  • Teachers in India flagged students' unfamiliarity with the CEFR.
  • Two scoring categories were flagged as unreliable.

Additionally, our team felt that the feedback could be customized further to reflect the students' individual strengths and weaknesses. At this time, we began research to refine our scoring scale and reporting approach.

Version 2 introduced major updates based on internal and external feedback. It replaced two scoring categories, revised the scoring scale, added subscores, and deemphasized the CEFR in favor of an in-house standard. Tick marks were removed from the graphics, and visual elements were modified to appeal to a broader audience, including teachers and principals.

Piloted in India, this iteration received mixed feedback. Students valued the subscores for highlighting strengths and weaknesses. However, the new categories (Approaching, Near, At, and Above Standard) confused some students, and others felt that expectations could be clearer.

Additionally, the Pen team saw opportunities to simplify the design and further personalize the goal-setting part of the assessment process. 

To address student feedback, we developed a Version 3 prototype using "levels" instead of standard categories. The concept of "levels" mirrors video game progression, making it more intuitive for younger students. This framework also normalizes starting at a low level and improving over time.

This version of the reports included:
  • An indication of the class average to help students situate their performance relative to their peers (see the sketch after this list).
  • A link to a supplementary Scoring Scale Translation document that compares Pen levels to international benchmarks.
  • A goal range rather than a fixed target.
  • A link to a video in which the instructor guides students to set a realistic goal for themselves.
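
As a rough illustration of how the class average and a goal range could be derived from assessment data, here is a minimal sketch. The CSV layout, column names, and goal heuristic are assumptions made for illustration; the actual goal-setting guidance came through the instructor video.

```python
import csv
from statistics import mean

MAX_SCORE = 20  # the assessment scale runs 0-20

# Hypothetical input: one row per student with class_id, uuid, and score.
with open("scores.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Class average, so each report can situate a student among peers.
by_class: dict[str, list[int]] = {}
for row in rows:
    by_class.setdefault(row["class_id"], []).append(int(row["score"]))
class_avg = {cid: mean(scores) for cid, scores in by_class.items()}

def goal_range(score: int) -> tuple[int, int]:
    """Hypothetical heuristic: aim 2-4 points above the pre-assessment
    score, capped at the top of the scale."""
    return (min(score + 2, MAX_SCORE), min(score + 4, MAX_SCORE))

for row in rows:
    low, high = goal_range(int(row["score"]))
    print(f"{row['uuid']}: class avg {class_avg[row['class_id']]:.1f}, "
          f"goal {low}-{high}")
```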

RESULTS

During implementation, I organized assessment scores for each school and generated individual student reports from CSV data. With the help of LLMs and online tutorials, I wrote Python scripts to separate and rename the reports by students' UUIDs. By September 2024, I had delivered over 1,500 individualized student reports for active schools.
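
For instance, splitting Canva's single bulk PDF might look like the minimal sketch below. It assumes the pypdf library, one page per report, and a hypothetical roster.csv listing student UUIDs in the same order the reports were generated; the actual scripts were organized per school.

```python
import csv
from pathlib import Path

from pypdf import PdfReader, PdfWriter  # pip install pypdf

BULK_PDF = Path("bulk_reports.pdf")   # single PDF exported from Canva
ROSTER = Path("roster.csv")           # hypothetical: UUIDs in export order
OUT_DIR = Path("reports")
PAGES_PER_REPORT = 1                  # assumption: each report is one page

# Read UUIDs in the same order the reports appear in the bulk PDF.
with ROSTER.open(newline="", encoding="utf-8") as f:
    uuids = [row["uuid"] for row in csv.DictReader(f)]

reader = PdfReader(BULK_PDF)
assert len(reader.pages) == len(uuids) * PAGES_PER_REPORT, "Roster/PDF mismatch"

OUT_DIR.mkdir(exist_ok=True)
for i, uuid in enumerate(uuids):
    writer = PdfWriter()
    start = i * PAGES_PER_REPORT
    for page in reader.pages[start:start + PAGES_PER_REPORT]:
        writer.add_page(page)
    # Name each file by the student's UUID for distribution.
    with (OUT_DIR / f"{uuid}.pdf").open("wb") as out:
        writer.write(out)
```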

Through the iterative development of these reports, I learned the importance of balancing design goals with scalability. Each version revealed challenges, but each was instrumental in teaching us how our students understand their progress and set goals. Streamlining production with tools like Canva and Python scripts gave me a first-hand look at how automation can scale efforts while maintaining quality.