Diverse strategies in action: English professors are developing unique and personalised methods to encourage original thought and deter the misuse of AI in their classrooms. Image (and typos) generated by Nano Banana.
Source
Yale Daily News
Summary
Without a unified departmental policy, Yale University’s English professors are independently addressing the challenge of generative AI in student writing. While all interviewed faculty agree that AI undermines critical thinking and originality, their responses vary from outright bans to guided experimentation. Professors Stefanie Markovits and David Bromwich warn that AI shortcuts obstruct the process of learning to think and write independently, while Rasheed Tazudeen enforces a no-tech classroom to preserve student engagement. Playwriting professor Deborah Margolin insists that AI cannot replicate authentic human voice and creativity. Across approaches, faculty emphasise trust, creativity, and the irreplaceable role of struggle in developing genuine thought.
Key Points
Yale English Department lacks a central AI policy, favouring academic freedom.
Faculty agree AI use hinders original thinking and creative voice.
Some, like Tazudeen, impose no-tech classrooms to deter reliance on AI.
Others allow limited exploration under clear guidelines and reflection.
Consensus: authentic learning requires human engagement and intellectual struggle.
Challenging transparency: A visual argument against mandatory AI disclosure statements, set against the backdrop of legal scrutiny. Image (and typos) generated by Nano Banana.
Source
Inside Higher Ed
Summary
Julie McCown, an associate professor of English at Southern Utah University, argues that mandatory AI disclosure statements in higher education are counterproductive. Initially designed to promote transparency and responsible use, these statements have instead reinforced a culture of guilt, distrust, and surveillance. McCown contends that disclosure requirements stigmatise ethical AI use and inhibit open dialogue between students and educators. Rather than policing AI use, she advocates normalising it within learning environments, rethinking assessment design, and fostering trust. Transparency, she suggests, emerges from safety and shared experimentation, not coercion.
Key Points
Mandatory AI disclosure creates a culture of confession and distrust.
Research shows disclosure reduces perceived trustworthiness regardless of context.
Anti-AI bias drives use underground and suppresses AI literacy.
Assignments should focus on quality and integrity of writing, not AI detection.
Normalising AI through reflective practice and open discussion builds genuine transparency.
by Patrick Shields – AI PhD Researcher, Munster Technological University
Estimated reading time: 5 minutes
Small and Medium-sized Enterprises (SMEs) are increasingly looking to leverage AI, but successful adoption requires proper education and strategic integration. This image represents the crucial need for training and understanding to empower SMEs to harness AI for business growth and innovation. Image (and typos) generated by Nano Banana.
Aligning National AI Goals With Local Business Realities
As third-level institutions launch AI courses across multiple disciplines this semester, there is a unique opportunity to support an essential business cohort in this country: the small and medium-sized enterprise (SME). In Ireland, SMEs account for over 99% of all businesses, according to the Central Statistics Office, yet they are struggling with AI adoption in comparison to their multinational counterparts.
Recent research has outlined how SMEs are adopting AI in a piecemeal and fragmented fashion, with just 10% possessing any AI strategy at all. The absence of a strategy often indicates an absence of policy, and therein lies a significant communication issue at the heart of the AI adoption challenge. Further research reports that four out of five business leaders believe AI is being used within their companies with little to no guardrails. This presents a significant challenge to Ireland’s National AI Strategy, originally published in 2021 and since updated with several initiatives, including the objective of establishing an AI awareness campaign for SMEs. The Government recognises that to achieve the original goal of 75% of all businesses embracing AI by 2030, and to attract the investment this will encourage, it will be essential to close the gap between SMEs and their multinational counterparts. These endeavours could be supported at third level, especially given the percentage of businesses that fall into the SME bracket and the demand for upskilling.
Turning AI Potential Into Practical Know-How
Having spent the summer months of 2025 meeting businesses as part of a Chamber of Commerce AI mentoring initiative in the South East of Ireland, I believe there is a significant education gap that third-level institutions could help to close. It became clear that the business representatives I spoke to had serious questions about how to properly commence their AI journeys. For them, the issue was not the technical element, because many of their existing programs and applications were adding AI features and introducing new AI-enabled tools which they could easily access. The prominent issue was managing the deployment of the technology in a way that matched employee buy-in with integrated, sustained and appropriate usage for maximum benefit. They require frameworks and education to roll this out effectively.
A real-world story:
As I returned to my AI adoption PhD studies this Autumn, I had the pleasure of meeting a part-time student employed by a local tech company in Cork. He wished to share the story of an AI initiative his employer had embarked upon, which had left him feeling anxious. The company had rolled out an AI meeting transcription tool to the surprise of its employees: there had been no prior communication about its deployment, and the tool was now in active use inside the organisation. The student felt that the AI was useful but had its limitations, such as being unable to identify speakers in the AI-generated meeting transcripts. He doubted whether the tool would stay in use at his workplace, and he had received no policy documents related to its correct handling. He was also unaware whether the organisation had an AI strategy, and the manner in which the technology had been integrated into daily operations had left him and his colleagues feeling quite uneasy. He felt the team would have benefited enormously from communication before and during the rollout. This same student was looking to commence a course in effective AI adoption and declared his belief that the industry was crying out for more training and development in this area.
The above tale of a faltering deployment is unfortunately not an isolated case. Reports in the US have shown that up to 95% of AI pilots fail before they ever make it to full production inside organisations. There may be many complex reasons for this, but one must certainly be a lack of understanding of the cultural impact of such change on teams, compounded by inadequate communication. It appears to me that despite the global investment in technology and the genuine intention to embrace AI, organisations continue to struggle with the employee education aspect of this transformation. If employers prioritise training and development in partnership with education providers, they may dramatically increase their chances of success. This could include establishing joint frameworks for AI deployment and management, with educational courses aligned to emerging business needs.
In adopting a people development approach, companies may not only improve the chances of AI pilot success but also foster trust, alignment and buy-in. Surely this is the real promise of AI: a better, brighter organisational future, starting this winter, where your greatest asset, your people, are not left out in the cold but are supported by higher education.
AI PhD Researcher, Munster Technological University
I am a PhD Researcher at Munster Technological University, researching how small and medium-sized businesses adopt Artificial Intelligence with a particular focus on the human, strategic and organisational dynamics involved. My work looks beyond the technical layer, exploring how AI can be introduced in practical, low-friction ways that support real business outcomes.
In a world where AI is ubiquitous, some educators are embracing its presence in the classroom. This image captures the perspective of a teacher who views AI not as a threat, but as an integral tool that can foster creativity, innovation, and critical thinking, challenging traditional views on technology in education. Image (and typos) generated by Nano Banana.
Source
The Atlantic
Summary
John McWhorter, a linguist and professor at Columbia University, argues that fears about artificial intelligence destroying academic integrity are exaggerated. He contends that educators should adapt rather than resist, acknowledging that AI has become part of how students read, write, and think. While traditional essay writing once served as a key training ground for argumentation, AI now performs that function efficiently, prompting teachers to develop more relevant forms of assessment. McWhorter urges educators to replace formulaic essays with classroom discussions, personal reflections, and creative applications that AI cannot replicate. Grammar and stylistic rules, he suggests, should no longer dominate education; instead, AI can handle mechanical precision, freeing students to focus on reasoning and ideas. For McWhorter, the goal is not to preserve outdated academic rituals but to help students learn to think more deeply in a changed world.
Key Points
The author challenges alarmist narratives about AI eroding higher education.
AI has replaced traditional essay writing as a mechanical exercise, but it cannot replace genuine thought.
Teachers should create assessments that require personal insight and classroom engagement.
Grammar and stylistic conventions are becoming obsolete as AI handles technical writing.
AI allows students to focus on creativity, reasoning, and synthesis rather than busywork.
The shift mirrors earlier transitions in media—from print to digital—without diminishing intellect.
A recent report highlights the critical need for increased AI literacy across higher education institutions. As technology rapidly advances, universities face an urgent challenge to equip students with the essential knowledge and digital skills required for an AI-driven future. Image (and typos) generated by Nano Banana.
Source
Research Professional News
Summary
A new report from the Higher Education Policy Institute warns that British universities must urgently improve AI literacy among both staff and students to stay relevant and equitable in an era of rapid digital transformation. Co-authored by Professor Wendy Hall and Giles Carden of the University of Southampton, the report argues that universities can no longer afford to simply “acknowledge AI’s presence” and must adopt structured strategies for skills development, teaching innovation, and research support. It highlights growing digital divides across gender, income, and subject disciplines. Contributions include a chapter written by ChatGPT itself, advocating AI training within doctoral and staff development programmes, and cautioning against uneven capability across institutions. The report also predicts that AI adoption could lead to job reductions in professional services as universities seek financial efficiencies.
Key Points
The Higher Education Policy Institute calls for systemic AI literacy across the UK university sector.
Experts stress active engagement and structured upskilling, not passive awareness.
Digital divides linked to gender, wealth, and discipline risk deepening inequality.
ChatGPT’s own chapter recommends integrating AI training into research and doctoral curricula.
Financial pressures may drive automation and staff cuts in professional services.