Key objectives for the Automated Scoring System were as follows:
- Develop ML models capable of precise multi-category scoring
- Create a flexible system for processing and scoring writing samples
- Design an output mechanism that could generate clear, actionable assessment data
- Provide a scalable solution that could potentially handle tens of thousands of student assessments
My goal was to bridge the gap between technical complexity and practical application, ensuring the PRD would be comprehensible and actionable for both the content and engineering teams. This approach allowed me to craft a document that was technically rigorous yet accessible, translating complex ML concepts into clear, strategic requirements to guide a potential implementation.

At a high level, the PRD specified a system capable of the following (a rough code sketch of this flow appears after the list):
- Accepting inputs as CSV files containing student writing samples
- Using AI-content and plagiarism detection to identify exempt responses and tag them with exemption codes
- Evaluating responses across five categories using NLP-based models
- Outputting two CSV files: one listing exempt responses with their exemption codes, and one listing scoreable responses with raw scores across the five categories
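
To make the intended data flow concrete, here is a minimal Python sketch of that pipeline. Every specific in it is assumed for illustration rather than taken from the PRD: the input column names (student_id, response), the exemption code EX_TOO_SHORT, the five category names, and the 0-4 raw-score scale. The detection and scoring functions are trivial stand-ins for the AI/plagiarism detectors and NLP models the system would actually require.

```python
import csv

# Hypothetical category names; the PRD's actual five categories aren't listed here.
CATEGORIES = ["organization", "development", "language_use", "conventions", "focus"]


def detect_exemption(text: str) -> str | None:
    """Stand-in for the AI/plagiarism detection step. A real system would
    call dedicated detectors; here we only flag blank or very short responses."""
    if len(text.strip()) < 20:
        return "EX_TOO_SHORT"  # hypothetical exemption code
    return None


def score_response(text: str) -> dict[str, float]:
    """Stand-in for the NLP scoring models: returns a trivial length-based
    raw score per category so the pipeline runs end to end."""
    raw = min(len(text.split()) / 100.0, 1.0) * 4.0  # assumed 0-4 raw scale
    return {category: round(raw, 2) for category in CATEGORIES}


def run_pipeline(in_path: str, exempt_path: str, scored_path: str) -> None:
    # Assumed input columns: student_id, response
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    exempt, scoreable = [], []
    for row in rows:
        code = detect_exemption(row["response"])
        if code:
            exempt.append({"student_id": row["student_id"], "exemption_code": code})
        else:
            scoreable.append({"student_id": row["student_id"], **score_response(row["response"])})

    # Output 1: exempt responses with their exemption codes
    with open(exempt_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["student_id", "exemption_code"])
        writer.writeheader()
        writer.writerows(exempt)

    # Output 2: scoreable responses with raw scores across the five categories
    with open(scored_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["student_id", *CATEGORIES])
        writer.writeheader()
        writer.writerows(scoreable)


if __name__ == "__main__":
    run_pipeline("responses.csv", "exempt.csv", "scored.csv")
```

In a real deployment, the two stand-in functions would be swapped for model-backed services while the CSV contract stayed stable, so the content team's downstream tooling would not need to change.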
Though this system wasn’t implemented, planning it gave me invaluable insights into AI integration and NLP applications in education, along with a firsthand look at the complexities of designing scalable, future-proof AI-driven solutions. This experience fueled my enthusiasm for exploring NLP and AI technologies further in my career.