When Reddit threads light up with warnings, it’s rarely without reason. This week, PlayStation players urged others to avoid a suspicious title called “I’m Not a Human Horror Simulator,” calling it a complete scam.
The game reportedly appeared on the PlayStation Store as an indie horror experience but was later accused of false advertising, broken gameplay, and exploitative monetization.
Players on the r/PlayStation subreddit quickly gathered evidence, sharing screenshots and gameplay clips that showed the game's poor quality. Most reports claimed the game contained near-empty environments, repetitive sound effects, and minimal interaction.
Some even reported that the game crashed after a few minutes, prompting concerns that it may have been rushed onto the store without any real quality checks.
A Reddit post titled “ATTENTION ALL PLAYSTATION USERS DO NOT BUY” quickly gained traction, reaching thousands of upvotes. Many users claimed the developer’s description misled buyers by promising a full story-driven horror experience.
Instead, what players downloaded appeared to be unfinished or deliberately misleading content sold for real money. The anger came not just from the poor game quality but from Sony’s apparent lack of curation and oversight.
While digital marketplaces like the PlayStation Store are flooded with smaller indie games, the issue here runs deeper. Players began to question how such a misleading product even passed Sony’s content verification process.
In the discussion, comparisons to mobile app store scams became a recurring theme, showing that players’ trust in PlayStation’s quality gatekeeping had begun to erode.
Sony’s New Review System Backfires
Ironically, this controversy erupted as Sony rolled out its new long-form PS Store review system. For the first time, players could write full-length reviews for PS4 and PS5 games directly on the PS Store website.
The feature was meant to empower the community, allowing players to share detailed feedback rather than just rating games with stars.
However, within just days of the rollout, issues began surfacing. Push Square reported that the PSN front page was now showing incomplete, spammy, and low-effort user reviews.
Players pointed out that Sony lacked proper filtering tools, allowing spam and irrelevant posts to surface even on major game listings.
This problem became painfully obvious when incoherent reviews for entirely unrelated titles appeared under trending games. What should have been a step toward community engagement turned chaotic.
Critics noted that this was a missed opportunity for Sony to improve store credibility, especially after several incidents involving deceptive indie games. Instead, the system appeared rushed and unprepared to handle large-scale user-generated content (UGC).
The lack of consistency in approval criteria made some suspect that moderation was either fully automated or handled without clear review guidelines.
Meanwhile, games like “I’m Not a Human Horror Simulator” slipped through these same gaps. The result was a cycle: low-quality games flooding the store, frustrated players leaving emotional reviews, and the platform promoting these reviews without proper moderation.
For a brand long associated with polished, quality gaming experiences, this was a reputational setback.
Xbox’s Transparent Rules Expose Sony’s Blind Spots
The contrast between Sony and Microsoft could not be clearer. Microsoft has long enforced explicit guidelines for developers and user-generated content through its official Microsoft Store UGC policy. These rules clearly outline safety measures, disclosure requirements, and restrictions for sensitive or misleading content.

For example, the Xbox team mandates full content review for any user-uploaded asset, along with disclosure if AI or third-party elements are involved.
It also limits deceptive advertising by ensuring titles and screenshots accurately represent gameplay. This active verification system prevents fake or poor-quality games from making it to the front page.
By comparison, Sony’s content policies remain opaque. Developers can publish digital titles on PSN with minimal checks, as long as technical submission tests pass. The platform’s main focus appears to be whether the game runs on hardware, not whether it delivers an authentic experience or misleads customers.
Players also pointed out that this lack of accountability not only damages consumer trust but also hurts genuine indie creators trying to release legitimate projects.
With modern gaming relying heavily on digital marketplaces, store integrity is becoming as important as console performance. Xbox’s curated environment has built noticeable player confidence, while Sony’s more relaxed approach seems to invite repeated controversy.
Players Demand Accountability and Refund Choices
Following the backlash, more users began calling for better refund support from Sony. The company’s existing refund rules require that a game not have been downloaded or streamed, making it nearly impossible to claim a refund once a title is installed.
This policy sparked fresh anger from players who felt trapped after purchasing fraudulent titles.
Some suggested a reform similar to Steam’s two-hour playtime refund rule, which allows players to test and return games within a limited timeframe.
For many, that would be a fair safety net against low-effort or scam projects. Others proposed that Sony should temporarily suspend developers proven to mislead buyers until further internal review.
So far, Sony has not issued an official statement regarding the ongoing “I’m Not a Human Horror Simulator” incident. However, the public reaction suggests that the pressure is mounting. Several influencers and gaming journalists are now amplifying the issue, linking it to Sony’s lack of enforcement and transparency.
The outrage shows how digital distribution has transformed not only how players access games but also how fast they can expose platform weaknesses. While Sony faces growing criticism, players continue demanding better consumer protection, clearer guidelines, and faster responses to scam reports.
The introduction of long-form reviews might have been intended as a trust-building measure, but it now highlights deep cracks in Sony’s store infrastructure.
What’s Next for Sony
The controversy serves as a warning: unchecked curation gaps can quickly destroy platform confidence. If Sony does not address the influx of misleading titles and unmoderated reviews, players might seek safer alternatives or call for regulatory intervention.
With Microsoft already leading in transparency and player protection, Sony can no longer rely on brand loyalty alone to maintain user trust.
A sustainable fix would involve a multi-step plan: stronger pre-release content checks, verified UGC review teams, and clear refund mechanisms. These measures would not just restore trust but also protect genuine indie developers from being overshadowed by scams.
At the heart of this issue lies one simple demand from the community: accountability. Players don’t expect perfection, but they do expect honesty, clarity, and fair treatment. As digital marketplaces keep expanding, the companies that prioritize these values will likely be the ones who keep their gamers’ loyalty for years to come.
Inside Microsoft’s XR‑018 User‑Generated Content Policy
Microsoft has officially raised the bar for how developers handle user‑generated content (UGC) and AI tools in games. Through its recently updated XR‑018: User Generated Content policy, the Xbox division now provides one of the most explicit sets of rules in the gaming industry for moderating player‑created material.
The document, last revised in March 2025, defines UGC broadly, covering text, images, videos, mods, character names, custom objects, and any form of AI‑assisted content that becomes visible to others online.
For developers, compliance is not optional. Any game featuring such content must offer players the ability to report inappropriate or harmful material directly in‑game.
This reporting must be tied to an actual moderation process and include visible feedback when offensive content is blocked or removed.
Microsoft also encourages developers to deploy proactive detection systems, such as text filtering through its StringService API, to automatically identify banned words and harmful expressions before they reach another player’s screen.
The goal is twofold: protect players from harmful material and reduce moderation delays that can degrade community experiences.
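To make the idea of proactive detection concrete, here is a minimal sketch of how a title might gate player-authored text before it reaches another player’s screen. The `verifyStrings` helper is a hypothetical stand-in for a platform string-verification call (the policy names StringService only at a high level), and the local blocklist exists purely for illustration.

```typescript
// Sketch: gate player-authored text before it is shown to other players.
// verifyStrings is a hypothetical adapter; a real integration would call the
// platform's string-verification service instead of this local blocklist.

type VerifyResult = { text: string; allowed: boolean };

async function verifyStrings(candidates: string[]): Promise<VerifyResult[]> {
  // Illustrative local check only; not the platform API.
  const blocklist = ["bannedword1", "bannedword2"];
  return candidates.map((text) => ({
    text,
    allowed: !blocklist.some((w) => text.toLowerCase().includes(w)),
  }));
}

// Substitute a neutral placeholder instead of surfacing blocked text.
async function publishPlayerText(raw: string): Promise<string> {
  const [result] = await verifyStrings([raw]);
  return result.allowed ? raw : "[message removed]";
}

// Example usage:
publishPlayerText("hello world").then(console.log); // -> "hello world"
```

The point of the pattern is that filtering happens before delivery, so moderation is not purely reactive to player reports.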
Importantly, the guidelines reflect a recognition of how generative AI is transforming creation tools, and Microsoft’s focus is on ensuring that AI‑assisted content doesn’t become a vector for abuse or misinformation.
According to the document, developers must also respect player privacy and restriction settings. If a user has UGC privileges turned off, they should never see or access third‑party creations.
Instead, developers must substitute safe placeholder content or, only when necessary, lock affected game modes. Blocking entire modes is labeled a “last resort,” reinforcing Microsoft’s commitment to inclusive play without compromising safety.
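As a rough illustration of that “placeholder first, lockout last” pattern, the sketch below swaps in safe default content when a player’s UGC privileges are off and reserves a full mode lock for cases that genuinely require live UGC. The types and the privilege lookup are assumptions for this example, not actual Xbox API calls.

```typescript
// Sketch: respect a player's UGC privilege setting by substituting safe
// placeholder content, and only lock a mode when no substitute is possible.

interface UgcItem { authorId: string; displayName: string; assetUrl: string }

const PLACEHOLDER: UgcItem = {
  authorId: "system",
  displayName: "Default item",
  assetUrl: "assets/default_item.png",
};

// Stand-in for a real platform privilege lookup; hard-coded for the sketch.
function canViewUgc(playerId: string): boolean {
  return playerId !== "restricted-player";
}

// Placeholder substitution keeps the mode playable for restricted players.
function resolveContentForPlayer(playerId: string, items: UgcItem[]): UgcItem[] {
  if (canViewUgc(playerId)) return items;
  return items.map(() => PLACEHOLDER);
}

// Locking the whole mode is the fallback when no safe substitute exists.
function shouldLockMode(playerId: string, modeRequiresLiveUgc: boolean): boolean {
  return !canViewUgc(playerId) && modeRequiresLiveUgc;
}
```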
Developers Face Stronger Accountability Under Xbox Policy
Microsoft’s approach is distinct because it merges player protection with a structured process of accountability for developers. The policy requires studio teams to implement clear content guidelines, such as terms of use or codes of conduct, that are easily accessible in‑game or on official websites.
This means that developers cannot rely solely on Microsoft’s platform moderation; they must maintain their own enforcement tools and take action when flagged content violates policy.
Supporting transparency, the policy lists practical requirements such as:
- Logging detailed reports that include timestamps, user IDs, and evidence.
- Sending confirmation to players when their report has been received.
- Explaining why a specific UGC was removed or disabled.
- Ensuring that removed offensive or copyrighted material cannot reappear.
This level of operational detail aims to eliminate gaps between player complaints and developer action. It effectively places the responsibility of moderation inside every game that uses UGC systems, instead of centralizing it at the store level.
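For a sense of what those requirements could look like in practice, here is a minimal data-model sketch of a report record covering timestamps, user IDs, evidence, reporter confirmation, removal reasons, and a simple hash list that keeps removed material from reappearing. The field names and the hash-based block list are illustrative assumptions, not Microsoft’s actual schema.

```typescript
// Sketch: a report log capturing the policy's operational requirements.

interface UgcReport {
  reportId: string;
  reportedAt: string;              // ISO 8601 timestamp of the report
  reporterUserId: string;          // player who filed the report
  targetUserId: string;            // creator of the reported content
  contentId: string;
  evidence: string[];              // e.g. screenshot URLs or chat excerpts
  acknowledgedToReporter: boolean; // confirmation sent back to the reporter
  resolution?: {
    action: "removed" | "disabled" | "dismissed";
    reason: string;                // explanation shared with affected players
    contentHash?: string;          // used to keep the material from returning
  };
}

// Hashes of removed material, checked on new uploads so it cannot reappear.
const removedContentHashes = new Set<string>();

function closeReport(
  report: UgcReport,
  action: "removed" | "disabled" | "dismissed",
  reason: string,
  contentHash?: string
): UgcReport {
  if (contentHash && action !== "dismissed") {
    removedContentHashes.add(contentHash);
  }
  return { ...report, resolution: { action, reason, contentHash } };
}
```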
These clauses were written with the rise of AI editing and generative content tools in mind, which have introduced new layers of complexity in UGC management.
The policy reflects Microsoft’s understanding that AI assistance can speed up creativity but also poses risks of inappropriate auto‑generated outputs. Hence, developers must disclose when something was generated by AI and take legal responsibility for what is published under their game’s name.
Broader Store Policy and Transparency Shifts
Microsoft’s commitment to transparency doesn’t stop at game content. Its Windows Store policy change log for 2025 lists several recent clarifications to ensure uniform treatment of user‑generated features across both Xbox and PC ecosystems.
These include definitions for AI‑generated submissions, moderation timeframes, and extended data reporting obligations for developers handling community uploads.
These updates reinforce the company’s holistic approach to safety. Rather than treating each product in isolation, Microsoft wants all UGC guidelines under the same framework, whether content appears on an Xbox console, a PC, or within the Xbox Cloud Gaming ecosystem.

Developer Moderation Workflow for User-Generated Content (Credit: Microsoft)
Industry observers have noted that this cross‑platform consistency helps Microsoft avoid the confusion that often plagues digital stores. Developers can read one document and know exactly what’s expected, something that cannot currently be said for Sony’s PlayStation Network or Nintendo’s eShop.
The inclusion of AI and mod‑related content is particularly significant. Microsoft’s documentation acknowledges that some games now rely on procedural generation, community prompts, or ML‑assisted design. By extending UGC rules to AI assets, Microsoft is taking a proactive stance that reflects where game creation is heading.
Sony and Nintendo Lag Behind on Transparency
Sony and Nintendo, meanwhile, remain far more secretive about their internal review processes. Neither company publishes a detailed UGC or AI disclosure policy comparable to Xbox’s XR‑018 documentation.
Developers and players alike often complain that PlayStation Store and Nintendo eShop submission requirements are opaque, offering little public insight into how AI or generative tools should be managed.
For Sony, recent controversies surrounding misleading PlayStation Store games have placed additional pressure on its content review pipeline. While developers must still pass technical certification, no formal public guidelines outline how Sony handles AI‑driven content or moderation reporting systems.
This lack of transparency has led to confusion about whether AI artwork, scripts, or mod support could violate platform rules.
Nintendo’s process, historically conservative, offers little guidance either. Its developer documentation focuses mainly on gameplay compliance and user safety for younger audiences, yet says nothing explicit about AI asset integration or mod‑based UGC sharing.
Given how community creativity defines modern gaming ecosystems, from sandbox editors to generative companion tools, the absence of published policies could soon undermine player confidence in those platforms.
The comparison highlights Microsoft’s strategic advantage in public communication. Its willingness to document rules with examples, testing procedures, and pass/fail cases creates clarity for all parties involved. For smaller studios, this transparency removes guesswork; for players, it builds trust.
Why Industry-Wide Clarity Matters
Gamers today are not just consumers; they’re creators and curators. From AI‑generated skins to intricate community mods, user content extends a game’s lifespan and expands its cultural reach.
However, that creative freedom comes with responsibility. Without clear oversight, harmful, exploitative, or stolen work can propagate at scale.
Microsoft’s new approach acknowledges this reality. By giving developers explicit instructions on how to detect, review, and report harmful material, it creates a structure where creativity thrives safely. UGC moderation, once treated as an afterthought, is now a built‑in requirement of ethical game development.
If Sony and Nintendo want to maintain parity in player trust, they will need to release similar AI and UGC policies publicly. Silence on these topics can no longer be justified by brand reputation alone. Developers increasingly expect formal instructions, especially as artificial intelligence continues shaping creative pipelines.
What’s clear now is that Microsoft has not only written rules but also established an industry benchmark. By linking proactive detection systems, detailed moderation processes, and clear player communication, it sets a model that other platforms cannot ignore for long.
As UGC and AI content continue to influence gaming, transparency will define which companies are ready for the next generation of digital creation.