The Hidden Dangers Behind the Ease of Working with AI

Teknologi Terkini | Posted on 02 October 2025 | Reading time: 5 minutes

The advent of generative artificial intelligence (AI), initially expected to accelerate and simplify work, has instead created new challenges. Rather than boosting efficiency, the flood of AI-generated content, such as that produced by ChatGPT or Gemini, is increasingly seen as an added burden for employees.

This phenomenon is called workslop, a portmanteau of "work" and "slop," describing the explosion of shallow content that overwhelms organizational workflows. A Harvard Business Review (HBR) report, produced together with Stanford and BetterUp, notes that the main problem is no longer a lack of information but an excess of low-value information. "What was intended to help now hinders," HBR states.

AI-generated documents often look polished on the surface but prove confusing on closer reading, forcing recipients to spend time simply deciphering their meaning. Research by the Stanford Social Media Lab and BetterUp Labs found that 40% of professional workers in the U.S. had received such content in the past month.

Unlike earlier fears of machines replacing humans, workslop marks an era in which humans use machines to “dump” work and cognitive load onto colleagues. AI documents may look clean but lack context and add mental burden. Each workslop instance can take 1–2 hours to fix or rewrite, and a company with 10,000 employees could lose over $9 million per year due to this “digital waste.”
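
To see how a figure of that magnitude could arise, here is a minimal back-of-envelope sketch. Only the headcount (10,000), the affected share (40%), and the 1–2 hour rework time come from the article; the incident rate and hourly labor cost below are hypothetical assumptions, and the study's own methodology for the $9 million figure is not detailed here.

```python
# Back-of-envelope estimate of annual "workslop" rework cost.
# Headcount, affected share, and rework hours come from the article;
# incidents_per_month and hourly_cost_usd are HYPOTHETICAL assumptions.

def annual_workslop_cost(employees: int,
                         affected_share: float,
                         incidents_per_month: float,
                         hours_per_incident: float,
                         hourly_cost_usd: float) -> float:
    """Rough yearly productivity cost of reworking low-value AI content."""
    affected = employees * affected_share
    hours_per_year = affected * incidents_per_month * hours_per_incident * 12
    return hours_per_year * hourly_cost_usd

# 10,000 employees and 40% affected (article); 2 incidents/month,
# 1.5 h midpoint of the article's 1-2 h range, $50/h (assumptions).
cost = annual_workslop_cost(10_000, 0.40, 2, 1.5, 50)
print(f"${cost:,.0f} per year")  # prints "$7,200,000 per year"
```

Even with conservative assumed rates, the estimate lands in the same multi-million-dollar range the report cites, which is the point: small per-incident rework times compound quickly at organizational scale.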

The impact is twofold: time wasted verifying information and decision-making quality weakened by shallow or biased data. Beyond reducing productivity, workslop also causes mental fatigue, as employees feel pressured to respond to an overwhelming stream of largely useless information. Stanford–BetterUp surveys show that one in three respondents avoids collaborating with colleagues who frequently send workslop, eroding trust and team collaboration.

HBR emphasizes that while the volume of documents, memos, and presentations increases, decision quality does not improve. Meetings often take longer because managers must sift through false information before reaching the core issues.

Harvard researchers classify AI users into two types: "passengers," who use AI as a shortcut without verifying quality, and "pilots," who provide clear context, review outputs, and use only the relevant parts. BetterUp writes, "Workslop is easy to produce but costly to clean up. What is thought to be a shortcut can become a dead-end for the team."

This trend marks a shift in modern organizational challenges: from speeding access to information to ensuring information quality for decision-making. Without curation and digital literacy, generative AI can create large-scale residual content that harms productivity and competitiveness. Quality standards and human verification are essential for AI to function as an assistant, not a burden.

HBR stresses, “Content volume does not automatically equal quality. Without curation, organizations are trading efficiency for information overload.” Organizations seeking to survive must go beyond the “more, faster” mindset and focus on careful selection and filtering. Without this, AI’s promise of efficiency may become a trap, drowning workers in superficially sophisticated but ultimately empty content.

Source: kompas.com

What do you think about this topic? Tell us in the comments. Don't forget to follow Digivestasi's Instagram, TikTok, and YouTube accounts to stay updated with the latest information on economics, finance, digital technology, and digital asset investment.

DISCLAIMER

All information contained on our website is summarized from reliable sources and published in good faith and for the purpose of providing general information only. Any action taken by readers on information from this site is their own responsibility.