Teaching in the age of AI: how flawless, but empty, paperwork can be

Author: Elisa Cristea
Besides my job as a lawyer at Suciu Partners, I also serve as a teaching assistant in public international law and climate change. In this latter capacity, I’ve had my fair share of experiences grading student papers. Over the last few weeks, I’ve noticed a trend: papers that are suspiciously well-structured and overly polished, yet lacking in depth.
After running them through AI detection tools, I often see percentages of 90-100% AI-generated content. In many cases, I don’t even need detection tools to recognize AI-generated work; certain patterns, such as overly rigid sentence structures, exaggerated formality, excessive reliance on generic phrasing, or even inconsistent formatting - like using all caps in titles or unnatural bullet structures - are clear indicators. At first glance, these papers look flawless. The grammar is impeccable, the arguments are structured logically, and citations are often present (though sometimes fabricated). But as someone who actively uses AI tools myself, I can tell when a paper lacks the "human touch". AI-generated essays tend to be overly generic, avoid deep engagement with the subject, and miss the nuanced critical thinking that comes from real intellectual effort.
It is in the face of this reality that I say: AI is not an enemy, but neither should it be leaned on uncritically.
When discussing this with students, I tell them openly: I use AI too. I rely on it to phrase ideas more concisely, to check typos and grammar, and to generate new perspectives and brainstorm ideas. But here’s the key difference: I don’t let AI do all the work. I put my mind into it, challenge the ideas, and refine them. AI can assist, but it should never replace critical thinking, originality, and depth of analysis.
The problem is not that students use AI - the problem is how they use it. When AI is used as a shortcut rather than a starting point, education suffers. We end up with students who submit flawless, yet empty, work - work that lacks the struggle, curiosity, and personal insights that real learning requires.
Against this backdrop, I conducted two exercises with one of my classes to illustrate both the strengths and the limitations of AI-generated content.
First, I asked students to request the same information from ChatGPT but using different prompts. The results were eye-opening - students quickly realized that even slight variations in wording led to significantly different responses, highlighting how AI interprets and structures information based on input. The second exercise involved asking ChatGPT a specific question and then cross-checking its response with reliable sources. This exercise sparked even more curiosity, as students saw firsthand that while AI can be a powerful tool, it is not infallible - it can generate errors, provide outdated information, or even fabricate details. Both exercises helped them understand that AI is useful but must be used critically, reinforcing the need for analytical thinking and fact-checking rather than blind reliance on automated responses.
This is why I believe we need to shift our approach: encourage AI as a tool for assistance rather than a replacement that does our work for us, ensuring that students engage with the material beyond what AI can readily generate. Focusing on oral discussions, debates, and interactive learning can help develop critical thinking and analytical skills that AI cannot and should not replicate. Additionally, requiring students to submit drafts and show their thought processes, rather than just polished final papers, fosters deeper engagement with the subject. Lastly, teaching students to fact-check and critically assess AI-generated content is essential to ensuring they do not passively accept information, but instead refine and challenge it through their own reasoning.
Students who know how to use AI intelligently will have an advantage in the job market. The ability to filter, refine, and enhance AI outputs is a skill that will define the next generation of professionals. But those who rely on AI without critical input will struggle in a world where innovation and personal insight matter. I’ve heard of teachers, as well as professionals in various fields, who fear AI and even take pride in not using it at all, as if rejecting technology is a badge of honor. In my view, this approach is both outdated and misguided. Ignoring AI is like refusing to use a calculator in mathematics or rejecting computers in research - it’s resisting progress for the sake of tradition. Instead of fearing AI, we should focus on understanding how to integrate it effectively, leveraging its strengths while maintaining human judgment, creativity, and critical thinking. Pretending AI doesn’t exist or refusing to engage with it is not only unrealistic but also a disadvantage in a world where technology is shaping every industry.
The same principles apply to us professionals, whether we are drafting legal documents, preparing reports, or creating various presentations or summaries for clients. Ignoring AI out of fear or pride doesn’t make our work more valuable - it only makes it less efficient. AI can streamline research, refine language, and enhance the clarity of our documents, but it should never replace our judgment, expertise, or strategic thinking. Just as students must learn to use AI responsibly in their academic work, we, as professionals, must embrace it as a tool that enhances our productivity rather than diminishing the quality of our work. Rejecting AI entirely is not a mark of professionalism; on the contrary, knowing how to use it wisely is what will set us apart in an increasingly digital world.