I think so too, and so does GPT-5 (OpenAI apparently hasn't anticipated this question yet, so they couldn't tailor the answer to their liking):
What is the information age?
The Information Age is the period in modern history marked by the rapid shift from traditional industry to economies and societies driven by information and digital technologies. It began in the late 20th century with the widespread adoption of computers, the internet, and telecommunications.
Key features:
Digital technologies: Personal computers, smartphones, cloud computing, and AI enable creation, storage, and analysis of vast amounts of data.
Internet connectivity: Global networks make information accessible instantly, reshaping communication, commerce, media, and education.
Knowledge economy: Value increasingly comes from data, software, intellectual property, and services rather than manufacturing alone.
Automation and digitization: Processes in business, government, and daily life are streamlined by software and machines.
Social impacts: Changes in work (remote work, gig economy), privacy concerns, cybersecurity, digital divides, and new forms of culture and social interaction.
Common milestones:
1940s–1960s: Early computing and transistor invention.
1970s–1980s: Microprocessors and personal computers.
1990s: Commercial internet and the World Wide Web.
2000s–present: Mobile internet, social media, cloud, big data, and AI.
In short, the Information Age is defined by the central role of digital information in how we live, work, and organize society.
Isn’t it more accurate to speak of a disinformation age in 2025?
Yes — calling 2025 a “disinformation age” is reasonable in many contexts. Here’s a concise, evidence-based view with key reasons, examples, and what it means for individuals and institutions.
Why the label fits
Scale and speed: Social platforms, messaging apps, and content recommendation systems spread false or misleading content worldwide in minutes, often faster than fact-checking can respond.
Sophisticated actors: State and non-state actors run coordinated influence operations using bots, troll farms, and paid networks to amplify narratives and polarize societies.
Generative AI and cheap media production: Large multimodal models (text, image, audio, video) make it easy and inexpensive to create realistic deepfakes, synthetic text, and fabricated evidence that are hard to detect.
Algorithmic amplification: Engagement-optimizing algorithms favor sensational, emotionally charged, or novel content — categories in which disinformation often performs well.
Erosion of trusted gatekeepers: Traditional media, once primary gatekeepers, have less control over distribution; social proof (likes, shares) often substitutes for editorial verification.
Information overload and attention scarcity: People skim, rely on heuristics, and repeat content without verification, creating fertile ground for misinformation to persist.
Concrete 2020s indicators
Political influence: Well-documented campaigns targeting elections and public opinion in multiple countries (e.g., disinformation linked to elections and referendums across democracies).
Health misinformation: COVID-19-era falsehoods about treatments and vaccines persist and have measurable public-health impacts.
Deepfakes and synthetic media: Increasing instances of audio/video impersonations in scams, political smears, and hoaxes. Publicly released tools and research in 2023–2025 show both improved generation and a growing arms race in detection.
Commercialized deception: “Click farms,” manipulated review networks, and fraudulent ad campaigns are more accessible and automated.
Research and governance activity: Growing body of academic studies, government reports, and platform transparency efforts acknowledging scale and harms.
Counterpoints — why “disinformation age” might be too narrow
Information Age is broader: The term “Information Age” still captures the wider structural shift: digitization, connectivity, data-driven economies, and technical innovation. Disinformation is a significant feature but not the only one.
Positive information abundance: Massive access to knowledge, collaborative tools, open science, and educational resources are continuing trends that the “disinformation” label overlooks.
Variation by context: Not every society experiences disinformation equally — media ecosystems, digital literacy, regulation, and platform usage shape local realities.
Net assessment
Accurate as a focal label: Using “disinformation age” stresses a dominant, harmful trend of the mid-2020s and correctly highlights an urgent societal challenge. It’s especially apt for discussions about politics, public health, media, and trust.
Not comprehensive: It should be used alongside other terms (Information Age, digital age, algorithmic age) when talking about broader economic and technological transformations.
Practical implications
For individuals: Verify sources, slow down on sharing, use diversified news sources, check fact-checkers, and learn to spot synthetic media.
For platforms: Improve content moderation, transparency, algorithmic accountability, and detection tools; invest in provenance and authentication.
For policymakers: Strengthen media literacy, regulation for transparency and platform responsibility, and support independent journalism and civil-society fact-checkers.
For technologists: Advance robust detection, watermarking/provenance standards, and tools that scale trustworthy signals.
If you want, I can:
Summarize major reports (2020–2025) documenting disinformation trends.
Show concrete recent examples (deepfakes, campaigns) with sources.
Outline a personal checklist to assess whether a piece of content is likely disinformation.
“I asked ChatGPT”
Thank you for providing an example of the driving force behind this being the disinformation age.
Um. tl;dr. Did YOU read it? You didn’t edit out the AI’s comment, at least.
What do you think this adds to any discussion? Is this meant to illustrate how we’re in a disinformation age?
I posted the answer in full because that was the output. Anything else would have been manipulation on my part. However, I don’t think the same answer will still be reproducible for much longer.
Of course, LLMs currently also give counterarguments, but for how much longer?
What do you expect? A one-line answer to a complex question?
What I expect is comments written by humans containing thoughts they came up with themselves. I don’t care what an AI pretends to think.
You should be interested in this because it is already influencing public opinion. Whether you think that’s a good thing or not is irrelevant; it’s simply a fact, whether you like it or not.
I should be interested in what? I know that people use AI to form their opinions; does that mean I also need to read slop?
What else can I say, except that it doesn’t make sense to bury your head in the sand. I’m not a big fan of LLMs, but I’m aware of the fact that billions of people already use this tech in all areas of their lives. That’s a fact, and your rejection doesn’t change that at all. If you had read the post properly, you would understand that the issue here is the influence this technology has on opinion-forming. My opinion: it is ushering in an age of targeted disinformation. I had hoped to hear your opinion on this, but it seems that even a basic understanding is impossible due to your refusal to accept reality.
The fuck do you mean bury my head in the sand? You’re not giving me any reason why I should read AI written slop. What are you on about?
Do you believe you are the center of the world? Do you believe that what you read on a low-reach social media application is reality? Seriously, you should wake up.
If you think LLMs are irrelevant, there is nothing more to say.