Embrace AI or Go Analog? Harvard Faculty Adapt to a New Normal


In a grand, traditional library setting, a female faculty member gestures towards a large glowing screen displaying AI data on the left, while a male faculty member examines a physical book with a magnifying glass on the right. Between them, a hovering question mark split in blue and orange signifies the choice between AI and traditional methods. The scene represents academics adapting to new technologies. Generated by Nano Banana.
As artificial intelligence becomes increasingly integrated into education, faculty at esteemed institutions face a pivotal choice: embrace the analytical power of AI or champion traditional analog learning. This image captures the dynamic tension and adaptation required as educators navigate this new normal. Image generated by Nano Banana.

Source

The Harvard Crimson

Summary

AI is now a widespread presence in Harvard classrooms, and faculty are increasingly accepting it as part of teaching rather than trying to ignore it. Around 80% of surveyed faculty reported seeing coursework they believed was AI-generated, yet most are not confident they can spot it reliably. In response, different pedagogical strategies are emerging: some instructors encourage responsible AI use (e.g. tutor chatbots, AI-assisted homework), while others "AI-proof" their classes with in-person exams. Harvard's Bok Center is providing support through AI-specific tools and workshops. While concerns persist (cheating, undermined learning), many believe that adjusting to AI and preparing students for its reality is the more sustainable path.

Key Points

  • Nearly 80% of Harvard faculty have seen student work they believe used AI.
  • Only ~14% of faculty feel very confident distinguishing AI-generated content.
  • Faculty responses vary: some embrace AI (homework/assistant tools), others shift to in-person exams to reduce risks.
  • The Bok Center helps instructors design AI-resilient assignments, tutor chatbots, and offers pedagogical support.
  • Some faculty worry that AI use might degrade deep learning, but many accept that AI is here to stay and practices must evolve.

Keywords

URL

https://www.thecrimson.com/article/2025/9/19/AI-Shapes-Classroom-Embrace/

Summary generated by ChatGPT 5


AI has turned college exams into a ‘wicked problem’ with no obvious fix, researchers warn


In a vast, traditional exam hall filled with students taking tests on laptops, three concerned researchers stand in the foreground, looking up at a large, glowing holographic display. The display shows complex data and a central neon pink circle stating 'THE AI EXAM PROBLEM - NO OBVIOUS FIX'. The image highlights the profound challenge AI poses to the integrity of college examinations. Generated by Nano Banana.
The integration of AI has transformed college exams into a complex ‘wicked problem’ with no easy solutions, as educational researchers increasingly warn. This image visualises the dilemma, where traditional assessment methods clash with advanced AI tools, creating a challenging landscape for academic integrity and fair evaluation. Image generated by Nano Banana.

Source

Business Insider

Summary

A recent study from Deakin University warns that AI’s impact on university exams represents a “wicked problem” — one without a single clean solution. Educators are struggling to balance academic integrity, creative assessment, and workload. Some are experimenting with mixed approaches (e.g. AI-free vs AI-permitted assignments, oral exams, personalised prompts), but each introduces trade-offs in logistics, fairness, and staff burden. The study argues that universities should stop chasing one-size-fits-all fixes and instead adopt flexible, context-sensitive, iterative strategies that acknowledge the complexity rather than trying to resolve it once and for all.

Key Points

  • AI has made exam design and assessment much harder; there is no consensus among faculty about how to respond.
  • Trade-offs are everywhere: authenticity vs manageability, creativity vs scalability.
  • Some teachers offer AI-free and AI-allowed tasks; others try oral exams or reflective work—but all find challenges.
  • Assessment methods are being revised to make misuse harder, but costs (time, logistics, ethics) are substantial.
  • The “wicked problem” framing suggests there’s no perfect fix; success may lie in local experimentation and flexible policy rather than universal rules.

Keywords

URL

https://www.businessinsider.com/ai-college-exams-wicked-problem-no-clear-fix-researchers-warn-2025-9

Summary generated by ChatGPT 5


Generative AI in Higher Education Teaching and Learning: Sectoral Perspectives


Source

Higher Education Authority

Summary

This report, commissioned by the Higher Education Authority (HEA), captures sector-wide perspectives on the impact of generative AI across Irish higher education. Through ten thematic focus groups and a leadership summit, it gathered insights from academic staff, students, support personnel, and leaders. The findings show that AI is already reshaping teaching, learning, assessment, and governance, but institutional responses remain fragmented and uneven. Participants emphasised the urgent need for national coordination, values-led policies, and structured capacity-building for both staff and students.

Key cross-cutting concerns included threats to academic integrity, the fragility of current assessment practices, risks of skill erosion, and unequal access. At the same time, stakeholders recognised opportunities for AI to enhance teaching, personalise learning, support inclusion, and free staff time for higher-value educational work. A consistent theme was that AI should not be treated merely as a technical disruption but as a pedagogical and ethical challenge that requires re-examining educational purpose.

Key Points

  • Sectoral responses to AI are fragmented; coordinated national guidance is urgently needed.
  • Generative AI challenges core values of authorship, originality, and academic integrity.
  • Assessment redesign is necessary—moving towards authentic, process-focused approaches.
  • Risks include skill erosion in writing, reasoning, and information literacy if AI is overused.
  • AI literacy for staff and students must go beyond tool use to include ethics and critical thinking.
  • Ethical use of AI requires shared principles, not just compliance or detection measures.
  • Inclusion is not automatic: without deliberate design, AI risks deepening inequality.
  • Staff feel underprepared and need professional development and institutional support.
  • Infrastructure challenges extend beyond tools to governance, procurement, and policy.
  • Leadership must shape educational vision, not just manage risk or compliance.

Conclusion

Generative AI is already embedded in higher education, raising urgent questions of purpose, integrity, and equity. The consultation shows both enthusiasm and unease, but above all a readiness to engage. The report concludes that a coordinated, values-led, and inclusive approach—balancing innovation with responsibility—will be essential to ensure AI strengthens, rather than undermines, Ireland’s higher education mission.

Keywords

URL

https://hea.ie/2025/09/17/generative-ai-in-higher-education-teaching-and-learning-sectoral-perspectives/

Summary generated by ChatGPT 5


Are students really that keen on generative AI?


In a collaborative workspace, a male student holds up a tablet displaying generative AI concepts, including a robotic arm, while a question mark hovers above. Another male student gestures enthusiastically, while two female students at laptops show skeptical or thoughtful expressions. A whiteboard covered with notes and diagrams is in the background. The scene depicts students with mixed reactions to generative AI. Generated by Nano Banana.
As generative AI tools become more prevalent, the student response is far from monolithic. This image captures the varied reactions—from eager adoption to thoughtful skepticism—as students grapple with the benefits and implications of integrating these powerful technologies into their academic and creative processes. Are they truly keen, or cautiously optimistic? Image generated by Nano Banana.

Source

Wonkhe

Summary

A YouGov survey of 1,027 students shows strong disapproval of using generative AI for assessed work: 93% say creating assessed work with AI is unacceptable, and 82% say the same of using AI for even parts of it. While many students have used AI study tools (summarising, finding sources, etc.), nearly half report encountering false or “hallucinated” content from those tools. Most believe their university’s stance on AI is too lenient rather than overly strict, and many expect that academic staff could detect misuse. Some benefits are reported — some students think their grades and learning outcomes improved — but overall confidence in AI’s reliability and appropriateness remains low.

Key Points

  • 93% of students believe work created via generative AI for assessment is unacceptable; 82% say even partial use is unacceptable.
  • Around 47% of students who use AI study tools see hallucinations or false information in the AI’s output.
  • 66% believe it likely their university would detect AI-generated work used improperly.
  • Many students report that their learning and grades are slightly better or about the same when using AI tools.
  • Most students are not particularly motivated to use AI for cheating; more often they use it in low-stakes or supportive ways.

Keywords

URL

https://wonkhe.com/wonk-corner/are-students-really-that-keen-on-generative-ai/

Summary generated by ChatGPT 5


How to use ChatGPT at university without cheating: ‘Now it’s more like a study partner’


Three university students (two male, one female) are seated at a table with laptops and books, smiling and engaged in discussion. Behind them, a large transparent screen displays a glowing blue humanoid AI figure pointing to various academic data and charts. The setting is a modern library, conveying a collaborative study environment where AI acts as a helpful, non-cheating resource. Generated by Nano Banana.
Moving beyond fears of academic dishonesty, many students are now leveraging ChatGPT as an ethical ‘study partner’ to enhance their learning experience at university. This image illustrates a collaborative approach where AI supports understanding and exploration, rather than providing shortcuts, thereby fostering a new era of academic assistance. Image generated by Nano Banana.

Source

The Guardian

Summary

Many students now treat ChatGPT less like a cheating shortcut and more like a study partner: for grammar checks, revision, practice questions, and organising notes. Usage jumped from 66% to 92% in a year. Universities are clarifying rules: AI can support study but not generate assignment content. Educators stress AI literacy, awareness of risks (hallucinations, fake references), and critical thinking to ensure AI complements rather than replaces learning.

Key Points

  • Student AI use rose from ~66% to ~92% in a year; AI is viewed more as a study partner than a cheating tool.
  • Valid uses: organising notes, summarising, and generating practice questions.
  • Risks: overreliance and hallucinations; using AI to write assignments remains banned.
  • Some universities track AI usage or require usage logs; policies are becoming clearer.
  • Message: AI should supplement, not substitute for, learning; build AI literacy and critical skills.

Keywords

URL

https://www.theguardian.com/education/2025/sep/14/how-to-use-chatgpt-at-university-without-cheating-now-its-more-like-a-study-partner

Summary generated by ChatGPT 5