Generative artificial intelligence tools such as ChatGPT have reshaped how students learn. They can break down difficult material, summarize readings and provide quick feedback when a student is stuck.
But as helpful as these tools are, they can also be harmful when misused. Using AI to cheat in class or to complete assignments can lead to serious consequences, ranging from a failing grade to suspension.
In summer 2024, the CUNY Board of Trustees approved policies stating that using generative AI to complete assignments or exams without an instructor’s permission, including uploading class work or exam questions to outside websites or AI tools, is considered cheating. Copying or paraphrasing AI content without proper citation also violates academic integrity.
If AI is used to generate any part of a student’s work, it needs to be credited just as it’s done when using any other source. Submitting AI-paraphrased material without disclosing that it came from an AI tool is plagiarism under CUNY’s rules.
But the rules feel broad and inconsistently applied. Some professors allow AI for editing or idea development, while others ban it outright. Many courses don't clearly state what's allowed, leaving students unsure.
AI editing tools like Grammarly may seem to fall outside the new policy's scope, but their use still depends on the course instructor. Some professors permit them for polishing work, while others expect all editing to be done by the student.
CUNY could resolve this inconsistency by creating clear, university-wide AI guidelines, with each department setting expectations that match the nature of its courses.
For example, a computer science course might let students use AI to debug small code errors, while a writing or business class could ban AI-generated drafts. Department-level guidelines could help set fair and realistic boundaries. Professors could also state in the syllabus whether it's acceptable to use AI to check grammar, brainstorm ideas or simplify complex readings, and where the line is drawn: for instance, whether AI may be used when writing essays or completing graded problems. Students should be expected to do their own work, and if they use AI for support, they must be transparent about it.
If AI is used responsibly, it could make learning better. It can help students understand difficult concepts, stay organized and get early feedback on a draft before turning it in to a professor.
However, students could take advantage of more relaxed policies and use AI to gain an unfair advantage, eroding their professors' trust.
CUNY must prepare students for a future where AI is everywhere while keeping education fair. That means clear department rules, professors explaining what’s allowed and students using AI to learn, not to replace their own critical thinking.
AI itself isn't the core problem, but regulation and monitoring will be necessary. Used with clear rules and honesty, it can make learning stronger and more efficient.