The weight of intellectual stagnation: How reliance on AI can hinder genuine learning and critical thinking in students. Image (and typos) generated by Nano Banana.
Source
The New York Times
Summary
Anastasia Berg, a philosophy professor at the University of California, Irvine, contends that even minimal reliance on AI tools threatens students’ cognitive development and linguistic competence. Drawing on her experience of widespread AI use in a moral philosophy course, Berg argues that generative AI erodes the foundational processes of reading, reasoning, and self-expression that underpin higher learning and democratic citizenship. While past technologies reshaped cognition, she claims AI uniquely undermines the human capacity for thought itself by outsourcing linguistic effort. Berg calls for renewed emphasis on tech-free learning environments to protect students’ intellectual autonomy and critical literacy.
Key Points
Over half of Berg’s students used AI to complete philosophy exams.
AI shortcuts inhibit linguistic and conceptual growth central to thinking.
Even “harmless” uses, like summarising, weaken cognitive engagement.
Cognitive decline could threaten democratic participation and self-rule.
Universities should create tech-free spaces to rebuild reading and writing skills.
The irony of a digital dilemma: Students caught using AI to cheat are now turning to the same technology to craft their apologies. Image (and typos) generated by Nano Banana.
Source
The New York Times
Summary
At the University of Illinois Urbana–Champaign, over 100 students in an introductory data science course were caught using artificial intelligence both to cheat on attendance and to generate apology emails after being discovered. Professors Karle Flanagan and Wade Fagen-Ulmschneider identified the misuse through digital tracking tools and later used the incident to discuss academic integrity with their class. The identical AI-written apologies became a viral example of AI misuse in education. While the university confirmed no disciplinary action would be taken, the case underscores the lack of clear institutional policy on AI use and the growing tension between student temptation and ethical academic practice.
Key Points
Over 100 Illinois students used AI to fake attendance and write identical apologies.
Professors exposed the incident publicly to promote lessons on academic integrity.
No formal sanctions were applied as the syllabus lacked explicit AI-use rules.
The case reflects universities’ struggle to define ethical AI boundaries.
Highlights the normalisation and risks of generative AI in student behaviour.
The rise of AI in education presents a crucial dichotomy: are we using it to truly empower students and cultivate essential skills, or are we inadvertently outsourcing those very abilities to algorithms? This image visually explores the two potential paths for AI’s integration into learning, urging a thoughtful approach to its implementation. Image (and typos) generated by Nano Banana.
Source
The Irish Times
Summary
Jean Noonan reflects on the dual role of artificial intelligence in higher education—its capacity to empower learning and its risk of eroding fundamental human skills. As AI becomes embedded in teaching, research, and assessment, universities must balance innovation with integrity. AI literacy, she argues, extends beyond technical skills to include ethics, empathy, and critical reasoning. While AI enhances accessibility and personalised learning, over-reliance may weaken originality and authorship. Noonan calls for assessment redesigns that integrate AI responsibly, enabling students to learn with AI rather than be replaced by it. Collaboration between academia, industry, and policymakers is essential to ensure education cultivates judgment, creativity, and moral awareness. Echoing Orwell’s warning in 1984, she concludes that AI should enhance, not diminish, the intellectual and linguistic richness that defines human learning.
Key Points
AI literacy must combine technical understanding with ethics, empathy, and reflection.
Universities are rapidly adopting AI but risk outsourcing creativity and independent thought.
Over-reliance on AI tools can blur authorship and weaken critical engagement.
Assessment design should promote ethical AI use and active, independent learning.
Collaboration between universities and industry can align innovation with responsible practice.
Education must ensure AI empowers rather than replaces essential human skills.
Challenging transparency: A visual argument against mandatory AI disclosure statements, set against the backdrop of legal scrutiny. Image (and typos) generated by Nano Banana.
Source
Inside Higher Ed
Summary
Julie McCown, an associate professor of English at Southern Utah University, argues that mandatory AI disclosure statements in higher education are counterproductive. Initially designed to promote transparency and responsible use, these statements have instead reinforced a culture of guilt, distrust, and surveillance. McCown contends that disclosure requirements stigmatise ethical AI use and inhibit open dialogue between students and educators. Rather than policing AI use, she advocates normalising it within learning environments, rethinking assessment design, and fostering trust. Transparency, she suggests, emerges from safety and shared experimentation, not coercion.
Key Points
Mandatory AI disclosure creates a culture of confession and distrust.
Research shows disclosure reduces perceived trustworthiness regardless of context.
Anti-AI bias drives use underground and suppresses AI literacy.
Assignments should focus on quality and integrity of writing, not AI detection.
Normalising AI through reflective practice and open discussion builds genuine transparency.
by Patrick Shields – AI PhD Researcher, Munster Technological University
Estimated reading time: 5 minutes
Small and Medium-sized Enterprises (SMEs) are increasingly looking to leverage AI, but successful adoption requires proper education and strategic integration. This image represents the crucial need for training and understanding to empower SMEs to harness AI for business growth and innovation. Image (and typos) generated by Nano Banana.
Aligning National AI Goals With Local Business Realities
As third-level institutions launch AI courses across multiple disciplines this semester, there is a unique opportunity to support an essential business cohort in this country: small and medium-sized enterprises (SMEs). In Ireland, SMEs account for over 99% of all businesses, according to the Central Statistics Office. They are also struggling with AI adoption compared with their multinational counterparts.
Recent research has outlined how SMEs are adopting AI in a piecemeal and fragmented fashion, with just 10% possessing any AI strategy at all. Not having a strategy may indicate an absence of policy, and therein lies a significant communication issue at the heart of the AI adoption challenge. Further insights indicate that four out of five business leaders believe AI is being used within their companies with few or no guardrails. This presents a significant challenge to Ireland’s National AI Strategy, originally published in 2021 and since updated to include several initiatives, such as the objective of establishing an AI awareness campaign for SMEs. The Government recognises that to achieve the original goal of 75% of all businesses embracing AI by 2030, and to secure the investment this will encourage, it will be essential to address the gap between SMEs and their multinational counterparts. Perhaps these endeavours can be supported at third level, especially given the proportion of businesses that fall into the SME bracket and the demand for upskilling.
Turning AI Potential Into Practical Know-How
Having spent the summer months of 2025 meeting businesses as part of a Chamber of Commerce AI mentoring initiative in the South East of Ireland, I believe there is a significant education gap here that third-level institutions could help to close. It became clear that the business representatives I spoke to had serious questions about how to properly commence and embrace their AI journeys. For them, it wasn’t about the technical element, because many of their existing programs and applications were already adding AI features and introducing new AI-enabled tools that they could easily access. The prominent issue was managing the process of deploying the technology in a way that matched employee buy-in with integrated, sustained and appropriate usage for maximum benefit. They require frameworks and education to roll this out effectively.
A real-world story:
As I returned to my AI adoption PhD studies this autumn, I had the pleasure of meeting a part-time student employed by a local tech company in Cork. He wished to share the story of an AI initiative his employer had embarked upon, which had left him feeling anxious. The company had rolled out an AI meeting transcription tool to the surprise of its employees. There had been no prior communication about its deployment, and the tool was now in active use inside the organisation. This particular student felt that the AI was useful but had its limitations, such as being unable to identify speakers in the AI-generated meeting transcripts. He had his doubts as to whether the tool would remain in use at his workplace, and he had not received any policy documents on its correct handling. He was also unaware whether the organisation had an AI strategy, and the manner in which the technology had been integrated into daily operations had left him and his colleagues feeling quite uneasy. He felt the team would have benefited enormously from some communication before and during the rollout. This same student was looking to commence a course in effective AI adoption and said he believed the industry was crying out for more training and development in this area.
The above tale of a faltering deployment is unfortunately not an isolated case. Reports in the US have shown that up to 95% of AI pilots fail before they ever make it to full production inside organisations. There may be many complex reasons for this, but one must certainly be a lack of understanding of the cultural impact of such change on teams, compounded by many examples of inadequate communication. It appears to me that, despite the global investment in technology and the genuine intention to embrace AI, organisations continue to struggle with the employee education aspect of this transformation. If employers prioritise training and development in partnership with education providers, they may dramatically increase their chances of success. This may include the establishment of joint frameworks for AI deployment and management, with educational courses aligned to emerging business needs.
In adopting a people-development approach, companies may not only improve the chances of AI pilot success but also foster trust, alignment and buy-in. Surely this is the real promise of AI: a better, brighter organisational future, starting this winter, in which your greatest asset, your people, is not left out in the cold but supported by higher education.
AI PhD Researcher, Munster Technological University
I am a PhD Researcher at Munster Technological University, researching how small and medium-sized businesses adopt Artificial Intelligence with a particular focus on the human, strategic and organisational dynamics involved. My work looks beyond the technical layer, exploring how AI can be introduced in practical, low-friction ways that support real business outcomes.