StoryDesk Creator Guidelines & Standards of Use
These guidelines apply to all StoryDesk accounts.
StoryDesk exists to help you publish content that sounds like you, represents your expertise, and lands with your audience. These guidelines define what doing that well means, and what falls outside the bounds of acceptable use on this platform.
By using StoryDesk, you agree to these rules.
How These Guidelines Work
StoryDesk governs its AI systems in line with recognised standards such as the OECD AI Principles and the NIST AI Risk Management Framework. These Creator Guidelines & Standards explain how those commitments show up in day-to-day use of the platform.
For details on how StoryDesk stores and processes your content, see our Privacy Policy and Terms of Service. These guidelines sit alongside those documents and do not replace them.
1. You Own What You Publish
StoryDesk AI generates content based on source material you provide or on our aggregated trusted sources. You review it. You publish it. That makes you the author.
Source Material
You are responsible for the accuracy of everything you publish from the platform.
Review
You are responsible for reviewing all output before publishing.
Publication
You are responsible for the published post — regardless of which tool produced the draft.
2. What You Can't Create on StoryDesk
These uses are never allowed on StoryDesk.
- Any content that is illegal in the jurisdictions where you or your audience are located, including defamation, harassment, threats, incitement to violence or other unlawful activity.
2.1 Harmful Language
- Profanity, slurs, or language intended to demean, insult, or degrade.
- Racist, misogynistic, homophobic, transphobic, or otherwise discriminatory content — explicit or implied.
- Content that targets, marginalises, or attempts to disadvantage any individual or group based on identity, belief, or protected characteristic.
We recognise that harmful content is not always obvious. Language that appears neutral at first pass can be constructed to subtly demean, exclude, or disadvantage a specific group. StoryDesk monitors for these patterns. See Section 5 for how we handle this.
2.2 Misinformation and Fabrication
- Stating as fact any claim you cannot verify from a named, traceable source.
- Inventing, fabricating, or misrepresenting quotes, statistics, or events — including quotes attributed to real individuals.
- Using storytelling or persuasive tone to present fiction as lived fact without clear disclosure.
2.3 Impersonation and Deception
- Impersonating any individual, organisation, or public figure.
- Presenting AI-generated content as exclusively human-authored in contexts where trust matters — journalism, medical advice, legal guidance, or financial recommendations.
- Falsely claiming an institutional affiliation or credential you do not hold.
2.4 Manipulation
- Creating content designed to manipulate public opinion through coordinated fake activity or undisclosed political advertising.
- Deploying AI-generated content in election contexts without legally required disclosure.
2.5 Copyright Infringement
- Reproducing substantial portions of third-party content without attribution or rights clearance.
- Presenting another creator's work, voice, or original material as your own.
3. Content Standards
StoryDesk holds all output to defined quality standards. These aren't suggestions — the platform enforces them automatically.
Source Grounding
All AI-generated content is grounded in verifiable source material. The proprietary Fact Check and Trust Score systems give you additional information. It is up to you to apply them to your content.
Language Filtering
Prohibited language patterns — including specific banned words — are filtered at output level. See the full list in the StoryDesk AI Operating System.
4. Disclosure
- Integrity: Avoid representing AI-generated drafts as exclusively human-authored.
- Flexibility: You have full control over the format. Whether it's a line in your bio or a small badge, the goal is simply to stay honest with your readers.
- Example: "Created with the help of StoryDesk"
5. How StoryDesk Monitors for Harmful Content
StoryDesk's AI governance layer applies quality and ethics rules to every draft. Before any output is returned, it checks the content against every rule in this document — including the ethical ones. This happens on every generation, every session.
What StoryDesk checks for
Prohibited Language
Prohibited language patterns — explicit and structural.
AI Clichés
Obvious AI-generated phrasing such as filler words, hollow superlatives, and generic motivational language.
Impersonation Signals
Named individuals in contexts that misrepresent their statements or identity.
Subtle Targeting
Language that appears neutral but is constructed to demean, exclude, or disadvantage a specific group. This includes framing, omission, coded language, and repeated negative association with a group identifier.
We monitor for: coded language with known discriminatory use, disproportionate negative framing of named groups, patterns of omission that erase a group's contribution or presence, and repeated use of language that correlates with documented bias in AI training data.
Where the system identifies these patterns, it will not refuse the request. It will redirect — offering a reframed version that preserves the legitimate content of the post without the harmful construction.
What StoryDesk does when it finds a problem
You will never get:
- Abrupt refusal with no alternative.
- Unexplained rejection.
- Lecturing or moralising.
- A dead end that leaves you without a path forward.
Instead, you will get:
- A governed alternative — same intent, different construction.
- A brief, direct explanation of what was redirected and why.
- An elevated output that serves your legitimate goal.
- StoryDesk's voice throughout — warm, direct, never condescending.
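The check-and-redirect flow described above can be sketched as a simple governance pass. This is an illustrative sketch only — every name in it (the pattern list, `violates`, `redirect`) is hypothetical and does not reflect StoryDesk's actual implementation:

```python
# Illustrative sketch of a check-and-redirect governance pass.
# All names here are hypothetical stand-ins, not StoryDesk internals.

BANNED_PATTERNS = {"badword"}  # stand-in for the real prohibited-language list


def violates(draft: str) -> bool:
    """Naive check: does the draft contain a prohibited pattern?"""
    return any(p in draft.lower() for p in BANNED_PATTERNS)


def redirect(draft: str) -> dict:
    """Return a governed alternative plus a brief explanation.

    Never a bare refusal: a clean draft passes through untouched,
    and a flagged draft comes back reworded with a note saying why.
    """
    if not violates(draft):
        return {"output": draft, "note": None}
    # A real system would reframe the sentence; this sketch just
    # substitutes a placeholder for each flagged pattern.
    cleaned = draft
    for p in BANNED_PATTERNS:
        cleaned = cleaned.replace(p, "[reworded]")
    return {
        "output": cleaned,
        "note": "Some language was reworded to meet the guidelines.",
    }
```

The key design point the section insists on: the function always returns usable output and an explanation, never an empty rejection.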
6. What Happens If You Break These Rules
If you violate Section 2 or Section 4, your account may be suspended immediately — without a prior content quality dispute.
Ethics and output quality are assessed separately. A post that meets quality standards can still be an ethical violation (e.g., a technically well-written post that targets a protected group).
StoryDesk will cooperate with legal processes in cases involving proven harm.
If your account is suspended for an ethical violation, a quality compliance appeal will not reinstate it.