- Dhruv
- Jan 7
- 2 min read
Objective
Develop and deploy a functional CornellGPT prototype to address repetitive university inquiries by leveraging AI trained on institutional data, significantly reducing support staff workload and enhancing the student experience. This initial phase focuses on delivering a streamlined, reliable solution with a basic interface while validating its impact on operational efficiency and user engagement.
Core Features
Data Integration:
Train the AI exclusively on Cornell’s official data sources, such as:
Academic calendars
Admissions FAQs
Financial aid guidelines
Course registration policies
Housing details
Focus on high-priority, frequently queried topics to ensure maximum utility.
Answering Functionality:
Provide a basic chatbot-style interface where students type questions and receive immediate, context-specific answers.
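As a minimal sketch of this question-answering flow, the snippet below matches a student's question against a small FAQ store by word overlap. The FAQ entries and the `answer_query` helper are illustrative placeholders, not the actual CornellGPT implementation, which would use a trained model over the full institutional corpus.

```python
import re

# Illustrative FAQ store; real data would come from Cornell's official sources.
FAQ = {
    "when does course registration open": "Pre-enrollment dates are listed on the academic calendar.",
    "how do i apply for financial aid": "Submit the required aid forms by the posted deadlines.",
    "where can i find housing information": "Housing options are described on the campus living pages.",
}

def answer_query(question: str) -> str:
    """Return the answer for the FAQ question sharing the most words with the input."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best_q = max(FAQ, key=lambda q: len(words & set(q.split())))
    # Fall back to a default message when nothing overlaps at all.
    if not words & set(best_q.split()):
        return "Sorry, I don't have an answer for that yet."
    return FAQ[best_q]
```

A production system would replace the overlap score with a proper NLP retrieval or generation model, but the request/response shape stays the same.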
Interface:
A simple and intuitive text-based interface hosted on CornellGPT.com.
The CornellGPT app is not part of the MVP scope and will be considered in future iterations.
The platform will be web-based, accessible via browsers, ensuring broad compatibility and ease of access for Cornell students and staff.
Technical Considerations
Natural Language Processing (NLP):
Utilize existing, scalable NLP frameworks tailored to the structured data provided by Cornell.
System Infrastructure:
Hosted on secure cloud infrastructure compliant with FERPA and institutional data security policies.
Ensure high availability and performance during peak usage times (e.g., registration periods).
Data Privacy and Compliance:
Strictly exclude sensitive data during training.
Implement opt-in policies for tracking usage metrics and ensure compliance with all relevant privacy regulations.
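One way to enforce the "exclude sensitive data" requirement is a scrub pass over every document before it enters the training corpus. The sketch below drops a few common PII patterns; the regexes and the `scrub` helper are assumptions for illustration, and a real compliance review would go well beyond pattern matching.

```python
import re

# Patterns for common PII; illustrative only, not an exhaustive FERPA filter.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), # US-style phone numbers
    re.compile(r"\b\d{7,9}\b"),                       # long ID-like digit runs
]

def scrub(text: str) -> str:
    """Replace matches of each PII pattern with a [REDACTED] marker."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```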
Deliverables
CornellGPT Prototype:
A working AI-powered chatbot capable of responding accurately to repetitive student queries based on the provided data.
Deployed within Cornell’s environment for controlled testing.
Data Coverage:
Ensure training data addresses at least 80% of current repetitive queries identified by Cornell’s support staff.
Basic Analytics (lowest priority):
Include a dashboard summarizing:
Query volume and resolution rates
Most frequently asked questions
Areas requiring additional data coverage
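The dashboard summary above can be computed from the chatbot's query log. The sketch below assumes a hypothetical log format of (question, resolved) pairs; the real MVP would read whatever log schema the deployed chatbot emits.

```python
from collections import Counter

def summarize(log):
    """Summarize a log of (question, resolved) pairs for the analytics dashboard."""
    total = len(log)
    resolved = sum(1 for _, ok in log if ok)
    # Most frequent questions point at both high-utility content and gaps
    # in data coverage (frequent but unresolved questions).
    top = Counter(q for q, _ in log).most_common(3)
    return {
        "query_volume": total,
        "resolution_rate": resolved / total if total else 0.0,
        "top_questions": top,
    }
```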
Success Metrics
User Engagement:
Target adoption by at least 50% of Cornell’s student body during the pilot phase.
Achieve a user satisfaction score of >85% based on survey feedback.
Operational Impact:
Demonstrate a minimum 40% reduction in repetitive queries handled by Cornell’s support staff within the first month.
Model Performance:
Maintain >90% accuracy in query resolution during the pilot.
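The >90% accuracy target could be checked against a hand-labeled sample of pilot transcripts, where a reviewer marks each answer correct or incorrect. The data shape and helper names below are assumptions for illustration.

```python
def resolution_accuracy(labels):
    """labels: list of booleans, True if the reviewer judged the answer correct."""
    return sum(labels) / len(labels)

def meets_target(labels, target=0.90):
    """True when sampled accuracy strictly exceeds the pilot target."""
    return resolution_accuracy(labels) > target
```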
Validation Plan
Internal Testing:
Conduct alpha testing with selected support staff and student volunteers.
Iterate based on performance feedback to fine-tune the model.
Pilot Launch:
Release CornellGPT to a larger student cohort for real-world use.
Measure the reduction in staff workload and student engagement metrics over a 3-month period.
Post-Pilot Evaluation:
Compile pilot findings to refine the solution and prepare for scaling to other institutions.