
2025 is already off to a great start for Factiverse, with multiple appearances in highly regarded publications that discuss the importance of our mission. MediaFutures, Dataconomy and even Forbes have reached out to Factiverse.
Here is a TL;DR of what each feature covers:
- Forbes - A discussion on engagement-driven misinformation and the dangers of unchecked viral content.
- MediaFutures - Addressing concerns around Meta’s decision to scale back content moderation and fact-checking procedures.
- Dataconomy - Key points and examples about AI failures in 2024 that cost businesses thousands of dollars.
Social Media’s Role in Spreading or Stopping Misinformation (Forbes Feature)

Our recent feature in Forbes discusses how social media companies like Meta and X prioritize engagement over accuracy.
This allows misinformation to spread faster than facts. Sensationalism, outrage and being the loudest voice in the room fuel increased user interactions, which are more often than not combative.
This dynamic has an extremely polarizing effect on users, which in turn erodes trust in traditional media companies.
Meta’s discontinuation of its third-party fact-checking program, justified by claimed concerns over censorship and bias, is more a shift to adhere to a new political landscape. The result is the removal of safeguards against misinformation.
Link to the full article: Here
Learning from the Biggest AI Mistakes (Dataconomy Feature)

As discussed in our previous blogs, AI models can generate false or misleading information. On the surface this may seem like a minor inconvenience, but as our dependency on this type of technology grows, a minor mistake can lead to serious consequences.
This guest article by Maria Amelie, our CEO, details a series of examples where AI technology went wrong. From chatbots confessing to crimes to stock market crashes, AI’s errors vary in their level of impact, but they can be extremely dangerous.
We always advocate that AI should assist, not replace, human judgment, especially in high-stakes sectors like law, finance and healthcare. One mistake in these sectors can ruin a business, a career and a reputation in one fell swoop.
Link to the full article: Here
Meta ends its rocky relationship with fact-checking (MediaFutures Feature)

Meta is ending the fact-checking processes in its content moderation teams. Zuckerberg justifies this move as promoting free speech, while organizations like Faktisk.no and BBC Verify warn that removing moderation could harm transparency and trust in news.
The consequences of misinformation are plain to see: it has fueled riots and vaccine hesitancy, with devastating effects on society.
New solutions like Factiverse and Project Reynir aim to be the answer to social media’s unwillingness to perform fact-checks, gathering credible information and pushing back against narratives littered with misinformation.
Link to the full article: Here