Latest Posts

How generative AI is really changing education – by outsourcing the production of knowledge to Big Tech


A classroom-like setting inside a large server room, where students in white lab coats sit at desks with holographic screens projected from their devices. Above them, a prominent neon blue sign glows with the logo 'BIG TECH EDUSYNC'. The students have blank expressions, connected by wires. The scene criticises the outsourcing of education to technology. Generated by Nano Banana.
The rise of generative AI is fundamentally reshaping education, leading to concerns that the critical process of ‘knowledge production’ is being outsourced to Big Tech companies. This image visualises a future where learning environments are dominated by AI, raising questions about autonomy, critical thinking, and the ultimate source of truth in an AI-driven academic landscape. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

This article argues that generative AI is shifting the locus of knowledge production from academic institutions into the hands of Big Tech platforms. As students and educators increasingly rely on AI tools, the power to define what counts as knowledge — what is true, what is cited, what is authoritative — is ceded to private firms. That shift risks marginalising critical thinking, local curricula, and disciplinary expertise. The author calls for reclaiming epistemic authority: universities must define their own ways of knowing, educate students not just in content but in evaluative judgement, and negotiate more equitable relationships with AI platforms so that academic integrity and autonomy aren’t compromised.

Key Points

  • Generative AI tools increasingly mediate how knowledge is accessed, curated and presented—Big Tech becomes a gatekeeper.
  • Reliance on AI may weaken disciplinary expertise and the role of scholars as knowledge producers.
  • Students may accept AI outputs uncritically, transferring trust to algorithmic systems over faculty.
  • To respond, higher education must build student literacy in epistemics (how we know what we know) and insist AI remain assistive, not authoritative.
  • Universities should set policy, technical frameworks, and partnerships that protect research norms, attribution, and diverse knowledge systems.

Keywords

URL

https://theconversation.com/how-generative-ai-is-really-changing-education-by-outsourcing-the-production-of-knowledge-to-big-tech-263160

Summary generated by ChatGPT 5


Generic AI cannot capture higher education’s unwritten rules


Five academics, dressed in business attire, are seated around a chess board on a wooden table in a traditional library, with books and papers. Above them, a large holographic screen displays 'AI - UNWRITTEN RULES: ACCESS DENIED' and 'CONTEXTUAL NUANCE: UNA'ILABLE', surrounded by data. Two thought bubbles above the central figure read 'HUMAN SHARED UNDERSTAIN' and 'SHARE 'ID UNDERSTANHINP'. The scene symbolizes AI's inability to grasp the subtle, unwritten rules of higher education. Generated by Nano Banana.
While AI excels at processing explicit data, it fundamentally struggles to grasp the nuanced, ‘unwritten rules’ that govern higher education. This image illustrates the critical gap where generic AI falls short in understanding the complex social, cultural, and contextual intricacies that define the true academic experience, highlighting the irreplaceable value of human intuition and shared understanding. Image (and typos) generated by Nano Banana.

Source

Wonkhe

Summary

Kurt Barling argues that universities operate not only through formal policies but via tacit, institution-specific norms—corridor conversations, precedents, traditions—that generic AI cannot perceive or replicate. Deploying off-the-shelf AI tools risks flattening institutional uniqueness, eroding identity and agency. He suggests universities co-design AI tools that reflect their values, embed nuance, preserve institutional memory, and maintain human oversight. Efficiency must not come at the cost of hollowing out culture, or letting external systems dictate how universities function.

Key Points

  • Universities depend heavily on tacit norms and culture—unwritten rules that guide decisions and practices.
  • Generic AI, based on broad datasets, flattens nuance and treats institutions as interchangeable.
  • If universities outsource decision-making to black-box systems, they risk losing identity and governance control.
  • A distributed “human-assistive AI” approach is preferable: systems that suggest, preserve memory, and stay under human supervision.
  • AI adoption must not sacrifice culture and belonging for efficiency; sector collaboration is needed to build tools aligned with institutional values.

Keywords

URL

https://wonkhe.com/blogs/generic-ai-cannot-capture-higher-educations-unwritten-rules/

Summary generated by ChatGPT 5


Study finds ChatGPT-5 is wrong about 1 in 4 times – here’s the reason why


A digital, transparent screen is displaying a large, bold '25%' error rate in red, with a smaller '1 in 4' below it. A glowing blue icon of a brain with a gear inside is shown on the left, while a red icon of a corrupted data symbol is on the right. A researcher in a lab coat is looking at the screen with a concerned expression. The scene visually represents the unreliability of a generative AI. Generated by Nano Banana.
While generative AI tools like ChatGPT-5 offer powerful capabilities, new research highlights a critical finding: they can be wrong as often as one response in four. This image captures the tension between AI’s potential and its inherent fallibility, underscoring the vital need for human oversight and fact-checking to ensure accuracy and reliability. Image (and typos) generated by Nano Banana.

Source

Tom’s Guide

Summary

A recent OpenAI study finds that ChatGPT-5 produces incorrect answers in roughly 25% of cases, due largely to the training and evaluation frameworks that penalise uncertainty. Because benchmarks reward confident statements over “I don’t know,” models are biased to give plausible answers even when unsure. More sophisticated reasoning models tend to hallucinate more because they generate more claims. The authors propose shifting evaluation metrics to reward calibrated uncertainty, rather than penalising honesty, to reduce harmful misinformation in critical domains.

Key Points

  • ChatGPT-5 is wrong about one in four responses (~25%).
  • Training and benchmarking systems currently penalise hesitation, nudging models toward confident guesses even when inaccurate.
  • Reasoning-focused models like o3 and o4-mini hallucinate more often—they make more claims, so they have more opportunities to err.
  • The study recommends redesigning AI benchmarks to reward calibrated uncertainty and the ability to defer instead of guessing.
  • For users: treat AI output as tentative and verify especially in high-stakes domains (medicine, law, finance).
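The incentive the study describes can be made concrete with a little arithmetic. The sketch below uses illustrative numbers (not figures from the study) to compare a typical binary-graded benchmark, which scores a wrong answer and an abstention identically, against a hypothetical rule that penalises wrong answers:

```python
# Toy illustration of why binary-graded benchmarks push models toward
# confident guessing. Scores and penalties here are illustrative assumptions.
#
# Rule A (typical benchmark): correct = 1, wrong = 0, abstain = 0.
# Rule B (penalises errors):  correct = 1, wrong = -1, abstain = 0.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of answering when correct with probability p_correct."""
    return p_correct * 1.0 + (1.0 - p_correct) * wrong_penalty

ABSTAIN_SCORE = 0.0

for p in (0.9, 0.5, 0.3):
    guess_a = expected_score(p, wrong_penalty=0.0)   # rule A: no penalty
    guess_b = expected_score(p, wrong_penalty=-1.0)  # rule B: wrong costs -1
    print(f"p={p:.1f}  rule A: guess={guess_a:+.2f}  "
          f"rule B: guess={guess_b:+.2f}  abstain={ABSTAIN_SCORE:+.2f}")

# Under rule A, guessing beats abstaining for any p > 0, so a model tuned
# to the benchmark never says "I don't know". Under rule B, abstaining is
# optimal whenever p < 0.5 -- rewarding calibrated uncertainty instead.
```

The point mirrors the study’s recommendation: change the scoring rule, and deferring instead of guessing becomes the rational strategy for an unsure model.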

Keywords

URL

https://www.tomsguide.com/ai/study-finds-chatgpt-5-is-wrong-about-1-in-4-times-heres-the-reason-why

Summary generated by ChatGPT 5


AI-Generated “Workslop” Is Destroying Productivity


A chaotic office or data center environment filled with people at desks, surrounded by numerous screens displaying complex, overwhelming data and downward-trending graphs. A glowing red holographic display overhead reads 'AI-GENERATED 'WORKSLOP' PRODUCTIVTY: ZERO', with a prominent downward arrow. On the floor, papers are strewn everywhere, and a robotic arm appears to be spilling sparkling digital 'waste.' The scene visually represents how poorly managed AI outputs can destroy productivity. Generated by Nano Banana.
While AI promises efficiency, its unmanaged or poorly implemented output can lead to ‘workslop,’ a deluge of low-quality or irrelevant content that ironically destroys productivity. This image vividly portrays a chaotic scenario where AI-generated clutter overwhelms human workers, underscoring the critical need for careful integration and oversight to truly leverage AI’s benefits without drowning in its drawbacks. Image (and typos) generated by Nano Banana.

Source

Harvard Business Review

Summary

The article introduces “workslop” — AI-generated content (emails, memos, reports) that looks polished but lacks substance — and argues it undermines productivity. As organisations push employees to adopt AI tools, many are producing superficial, low-value outputs that require downstream repair or rewriting by others. The study suggests that while AI adoption has surged, few companies experience measurable productivity gains. The hidden cost of workslop is that the burden shifts to recipients, who must clarify, fix, or discard shallow AI outputs. For AI to add real value, its use must be paired with human review, prompt skill, and metrics focussed on outcomes rather than volume.

Key Points

  • “Workslop” is AI content that appears polished but fails to meaningfully advance a task.
  • Many organisations see limited return on their AI investments: activity without impact.
  • The cost of superficial AI output is borne by others, who must rework or reject it.
  • To counter workslop: review AI outputs, set expectations for quality, teach prompt & editing skills.
  • Value metrics should prioritise outcomes (impact, clarity) over sheer output volume.

Keywords

URL

https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

Summary generated by ChatGPT 5


NSW public school students to get access to state-of-the-art generative AI app


A diverse group of cheerful public school students in a modern classroom is excitedly gathered around a teacher. The teacher holds a large, glowing tablet displaying a generative AI interface with a 'CREATE' icon. In the background, a large screen shows a variety of AI-generated content (images, text, music notes), and the Sydney skyline is visible through a large window. The scene symbolises public school students gaining access to advanced AI technology. Generated by Nano Banana.
In a significant step forward for public education, students in New South Wales are set to gain access to a state-of-the-art generative AI app. This image envisions a future classroom where students and teachers collaborate using powerful AI tools, highlighting a new era of learning and creativity in Australian schools. Image (and typos) generated by Nano Banana.

Source

CyberDaily.au

Summary

The New South Wales government in Australia is rolling out a generative AI app across public schools to support students in areas like writing, problem solving, and research. The aim is to help with learning and reduce educational inequality—particularly for those with fewer resources. Officials emphasise that the app will supplement—not replace—teaching, with controls in place to prevent outright cheating. Teachers will receive training on appropriate use, and the pilot includes oversight and evaluation to monitor impacts, equity, and risk.

Key Points

  • NSW public schools will gain access to a generative AI app intended as a learning support tool, not a replacement for instruction.
  • The rollout aims to reduce disparity by assisting students who may lack access to private tutoring and helping with writing, research, and structuring work.
  • Safeguards include teacher training, monitoring, and policies to restrict misuse or overreliance.
  • The government will pilot the programme to evaluate outcomes: learning improvements, equity effects, and unintended harms.
  • The introduction reflects a shift from resisting AI to integrating it thoughtfully at the school level.

Keywords

URL

https://www.cyberdaily.au/government/12672-nsw-public-school-students-to-get-access-to-state-of-the-art-generative-ai-app

Summary generated by ChatGPT 5