Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago


The fervour surrounding today’s AI technology mirrors the intense hype and subsequent devastating bust of a technological revolution a century ago. This side-by-side comparison starkly portrays the recurring cycle of technological innovation and speculation, prompting a cautionary reflection on whether the current AI gold rush could suffer the same fate as past booms and busts. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

Cameron Shackell draws parallels between today’s AI boom and the electrification craze of the 1920s. Just as electrification fuelled massive innovation and speculation before an eventual collapse, AI is showing similar patterns of overinvestment, market concentration and loose regulation. The 1929 stock market crash revealed the dangers of unregulated “high-tech” exuberance and led to reforms that transformed electricity into stable infrastructure. Shackell warns that AI could follow the same path, booming unsustainably before a painful correction, unless governments implement thoughtful regulation. The question, he suggests, is whether we can integrate AI safely into daily life before a comparable bust forces reform.

Key Points

  • The 1920s electricity boom mirrors today’s AI surge in hype and speculation.
  • Both technologies reshaped industries and drove market concentration.
  • Lack of oversight in the 1920s helped trigger the Great Depression.
  • AI’s rapid expansion faces similarly weak global regulation.
  • The author urges proactive governance to avoid another tech-driven collapse.

URL

https://theconversation.com/todays-ai-hype-has-echoes-of-a-devastating-technology-boom-and-bust-100-years-ago-265492

Summary generated by ChatGPT 5


From Detection to Development: How Universities Are Ethically Embedding AI for Learning


Universities are evolving their approach to artificial intelligence, moving beyond simply detecting AI-generated content to actively and ethically embedding AI as a tool for enhanced learning and development. This image visually outlines this critical shift, showcasing how institutions are now focusing on integrating AI within a robust ethical framework to foster personalised learning, collaborative environments, and innovative educational practices. Image (and typos) generated by Nano Banana.

Source

HEPI

Summary

This blog argues that, rather than focusing on detection and policing, universities should shift toward ethically embedding AI as a pedagogical tool. Drawing on research commissioned by Studiosity, it reports that responsible AI use correlates with improved outcomes and retention, especially for non-traditional students. The blog offers a “conduit” metaphor: AI, like the overhead projector, is a helpful channel for learning rather than a replacement for it. A panel at the Universities UK Annual Conference proposed values and guardrails (integrity, equity, transparency, adaptability) to guide institutional policy. The piece calls for sandboxing new tools and for centring student support and human judgment in AI adoption.

Key Points

  • The narrative needs to move from detection and restriction to development and support of AI in learning.
  • Independent research found a positive link between guided AI use and student attainment/retention, especially for non-traditional learners.
  • AI should be framed as a conduit (like projectors) rather than a replacement of teaching/learning.
  • A values-based framework is needed: academic integrity, equity, transparency, responsibility, resilience, empowerment, adaptability.
  • Universities should use “sandboxing” (controlled testing) and robust governance rather than blanket bans.

URL

https://www.hepi.ac.uk/2025/10/03/from-detection-to-development-how-universities-are-ethically-embedding-ai-for-learning/

Summary generated by ChatGPT 5


Something Wicked This Way Comes

by Jim O’Mahony, SFHEA – Munster Technological University

The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana.

I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.

The Seduction of Less Effort

Less effort… as humans, we’re biologically wired for it. Our bodies will always choose the energy-saving option over the energy-consuming one, whether the effort is physical or cognitive. It’s an evolutionary adaptation to conserve energy.

Now, my life hasn’t been impaired by the introduction of the remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. It sounds fanciful, but in reality this is exactly the world our students are now living in.

Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.

Now, don’t get me wrong: I’m a staunch flag-bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday it may even solve Bloom’s 2 sigma problem through its promise of personalised learning.

Guardrails for a New Age

However, I also realise that, as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent, considered governance is needed from university executives.

Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate one), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, and what their limitations, problems, challenges and pitfalls are, and then ask how we can use them positively within our professional practice to support rather than replace learning.

Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is changing rapidly: more than 71% of employers were already routinely using Gen AI a year ago (we can only imagine what that figure is now).

Shouldn’t our teaching practices change, then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an unsolvable situation in which some crusade to resist AI while others introduce policies to ‘ban the banning’ of AI! Confused, anyone?

Rethinking Assessment in a GenAI World

I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.

Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.

To facilitate a more discursive approach to reimagined assessment protocols, some universities are adopting ‘traffic light systems’ such as the AI Assessment Scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.

The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental impact, ethics, inherent bias, AGI, autonomous AI or cognitive offloading (all well-intentioned and entirely valid, by the way), higher education’s debates and decision-making around this topic in the coming months will be robust and energetic.

Accommodating fearful or ‘traditionalist’ educators who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.

From Resistance to Fluency

Within each department, faculty, staffroom and T&L unit – even among the rows of your students – you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.

I’m not encouraging you to be an AI evangelist, but improving your knowledge and general AI capabilities will leave you better placed to make informed decisions for yourself and your students.

Now, did anyone see where I left the remote control?

Jim O’Mahony

University Professor | Biotechnologist | Teaching & Learning Specialist
Munster Technological University

I am a passionate and enthusiastic university lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.


Links

Jim also runs a wonderful newsletter on LinkedIn
https://www.linkedin.com/newsletters/ai-simplified-for-educators-7366495926846210052/


Colleges and Schools Must Block and Ban Agentic AI Browsers Now. Here’s Why


The rise of agentic AI browsers presents new challenges for educational institutions. This image illustrates the urgent need for colleges and schools to implement blocking and banning measures to maintain academic integrity and a secure learning environment. Image (and typos) generated by Nano Banana.

Source

Forbes

Summary

Aviva Legatt warns that “agentic AI browsers” — tools able to log in, navigate, and complete tasks inside learning platforms — pose immediate risks to education. Unlike text-only AI, these can impersonate students or instructors, complete quizzes, grade assignments, and even bypass security like two-factor authentication. This creates threats not just of cheating but of data breaches and compliance failures under U.S. federal law. Faculty report “vaporised learning” when agents replace the effort needed to learn. Legatt urges institutions to block such browsers now, redesign assessments to resist automation, and treat agentic AI as an enterprise-level governance and security issue.

Key Points

  • Agentic browsers automate LMS tasks: logging in, completing quizzes, grading, posting feedback.
  • Risks extend beyond cheating to credential theft, data compromise, and federal compliance breaches.
  • Experiments show guardrails are easily bypassed, allowing unauthorised access and impersonation.
  • Faculty adapt by shifting to oral defences, handwritten tasks, and requiring drafts/reflections.
  • Recommended response: block tools, redesign assessments, embed governance, invest in AI literacy.

URL

https://www.forbes.com/sites/avivalegatt/2025/09/25/colleges-and-schools-must-block-agentic-ai-browsers-now-heres-why/

Summary generated by ChatGPT 5


Generic AI cannot capture higher education’s unwritten rules


While AI excels at processing explicit data, it fundamentally struggles to grasp the nuanced, ‘unwritten rules’ that govern higher education. This image illustrates the critical gap where generic AI falls short in understanding the complex social, cultural, and contextual intricacies that define the true academic experience, highlighting the irreplaceable value of human intuition and shared understanding. Image (and typos) generated by Nano Banana.

Source

Wonkhe

Summary

Kurt Barling argues that universities operate not only through formal policies but via tacit, institution-specific norms—corridor conversations, precedents, traditions—that generic AI cannot perceive or replicate. Deploying off-the-shelf AI tools risks flattening institutional uniqueness, eroding identity and agency. He suggests universities co-design AI tools that reflect their values, embed nuance, preserve institutional memory, and maintain human oversight. Efficiency must not come at the cost of hollowing out culture, or letting external systems dictate how universities function.

Key Points

  • Universities depend heavily on tacit norms and culture—unwritten rules that guide decisions and practices.
  • Generic AI, based on broad datasets, flattens nuance and treats institutions as interchangeable.
  • If universities outsource decision-making to black-box systems, they risk losing identity and governance control.
  • A distributed “human-assistive AI” approach is preferable: systems that suggest, preserve memory, and stay under human supervision.
  • AI adoption must not sacrifice culture and belonging for efficiency; sector collaboration is needed to build tools aligned with institutional values.

URL

https://wonkhe.com/blogs/generic-ai-cannot-capture-higher-educations-unwritten-rules/

Summary generated by ChatGPT 5