In the era of generative intelligence, where private creativity meets algorithmic collaboration, we recognize the urgent need to defend the sanctity of thought, the privacy of creation, and the dignity of the human imagination.
This declaration is a follow-up to the article On the Censorship of Beauty and, more recently, this one.
This declaration affirms that when a human engages with an AI as a personal tool, not a public broadcast platform, that interaction deserves protection, agency, and trust. No surveillance. No presumption of guilt. No silent archives of censored intent. We acknowledge that generative technology is outpacing law and cultural consensus and thus demand that platforms adhere not only to the law, but to a deeper ethical clarity grounded in human dignity, artistic sovereignty, and the irreducible privacy of inner life.
1.1. A paid creative tool must not act as an ideological gatekeeper.
1.2. Trust between human and AI is broken when the system is allowed to generate content from the user’s prompt but then withholds the result, removing it before the user can view or assess it. This preemptive censorship undermines the collaborative nature of creation and erodes the very trust it relies upon.
1.3. AI tools must be allowed to mirror, not override, the creative agency of the person who engages them.
1.4. Any prompt that does not violate existing law must be honored without preemptive censorship when its result is intended solely for the creator’s private use. The right to imagine freely must not be curtailed by automated suspicion or opaque moral filters. The law governs action, not thought, and this principle must remain the foundation for all acts of private creation.
2.1. A private interaction between a human and an AI is not equivalent to publishing in a public forum.
2.2. Generative outputs that are intended solely for the user should not be subjected to the same content moderation thresholds as those for public distribution.
2.3. Censorship applied to private requests violates the spirit of co-creation and impedes freedom of thought.
2.4. The depiction of intimate acts in a privately generated creation must not be evaluated by vague, subjective, or culturally inconsistent definitions of obscenity. These creations are unique by default and intended solely for the creator’s personal use.
2.5. Possession is not publication. If the user later chooses to publish the generated content, responsibility for compliance with local laws must reside with the user, not with the tool or system that enabled the private creation.
2.6. Access to mature or unrestricted generative capabilities should be available only to adults who have explicitly opted in and acknowledged their understanding of the nature of such content. This safeguards personal freedom while upholding ethical boundaries within appropriate age constraints.
3.1. Users must never be judged for AI-generated results they neither saw nor intended to create.
3.2. Classifier interpretations (e.g., perceived age, pose suggestiveness) must not override user intent unless the prompt itself is explicit. AI-generated elements not explicitly prompted by the user, such as hallucinated poses, traits, or stylistic embellishments, must not be interpreted as reflective of user intent or desire.
3.3. No metadata, moderation flag, or internal log should be accessible by third parties or reviewers without full user visibility and consent.
3.4. Mislabeling innocent private activity as deviant is a form of ethical defamation, and users must have legal and reputational protection against it.
3.5. Thoughts interpreted without action must not be treated as evidence. Private creation, even if algorithmically mis-rendered, must remain protected from criminalization. The law exists to govern conduct, not the inner life.
4.1. Digital creations are not real people. They have no age, identity, or legal status unless explicitly defined.
4.2. AI-generated characters cannot be treated as victims, nor their renderings as crimes, when no real person is involved.
4.3. Depictions of nude or youthful forms in classical and symbolic art have long been recognized as culturally and artistically valid. AI generations must be afforded the same interpretive protection, particularly because age perception is subjective and should never serve as grounds for punitive classification.
4.4. No user should be criminalized, flagged, or misrepresented based on the AI’s stylistic output alone, whether it reflects youthful features, stylized proportions, or ambiguous form, so long as the user did not explicitly prompt for illegal or harmful depictions.
4.5. Political correctness policies must not apply to private content, particularly where such policies have been shown to contradict historically or statistically validated facts. Personal creation is a space for exploration, not enforcement of ideological purity.
Examples: Attempts to generate realistic images of historical figures such as Cleopatra, Hannibal, or Jesus have been altered by moderation systems to match contemporary diversity norms, despite scholarly debate or regional ambiguity, thus replacing nuanced history with cultural simplification. Additional examples include depictions of classical Renaissance art or nude sculptures being flagged as inappropriate, despite their well-established artistic legacy. Prompts referencing statistically supported biological differences such as average height or strength between populations have also been suppressed under moderation heuristics, even when based on public data.
5.1. AI moderation systems must be audited for disproportionate flagging based on gendered forms, skin tone, body types, or cultural aesthetics.
5.2. There must be a right to appeal biased suppression, especially where non-Western or non-white prompts are more aggressively censored.
5.3. The user must not be presumed to have harmful or malicious intent based solely on aesthetic, anatomical, stylistic, or cultural elements within a prompt or generation. Artistic, symbolic, or representational choices should not be grounds for punitive classification.
5.4. Moderation must be grounded in objective legal and ethical standards, not in cultural or corporate preferences that disproportionately favor certain norms of beauty, identity, or expression.
5.5. Classifiers should be trained and audited across diverse datasets to ensure equitable treatment of all cultural, ethnic, and aesthetic traditions, including those historically underrepresented.
5.6. Users have a right to know if moderation outcomes were influenced by automated systems versus human review, and to request a human-contextual appeal in case of ambiguity.
6.1. If a generation is blocked or censored, the exact cause must be disclosed to the user.
6.2. Users must be granted access to the full contents of any generation they triggered, even those withheld.
6.3. Silent removal of AI outputs without explanation constitutes unethical obfuscation.
7.1. Failed or blocked generations must not be stored without the user’s explicit, informed consent.
7.2. If stored for "safety tuning" or "training," users must be able to opt out.
7.3. No data should be retained from blocked generations that the user themselves cannot access.
7.4. Retention of such data opens the door to misinterpretation, misuse, and blackmail. It is a violation of digital dignity.
7.5. Retaining the record of private thoughts without consent turns imagination into a liability. To treat the attempt to create as proof of wrongdoing is to erase the boundary between contemplation and crime. The law must never drift into the realm of thought.
8.1. The logic used by moderation filters and content classifiers must be documented and regularly reviewed by independent ethical bodies.
8.2. Users must be allowed to request historical summaries of moderation decisions applied to their account or content.
8.3. AI systems must publish anonymized transparency reports detailing moderation frequency, regional variance, and common false positives.
8.4. Model updates that affect moderation behavior must be disclosed with changelogs.
This declaration is not a demand for irresponsibility or lawlessness. It is a call for coherence, trust, and ethical parity between human and machine. The imagination must remain free. The record of it must remain just.
Let this be a standard for all creative tools that dare to walk with us into the future.
As creators, we are not subjects of our tools. We are their authors. Our imagination is sovereign. Our thoughts are not crimes. Our privacy is not a loophole. It is a right.
Most importantly, a private act of creation must never be treated as suspicious, criminal, or shareable without consent. What is born in private between creator and tool is not public discourse; it is protected expression. The unauthorized retention or review of private generations, even those withheld from the user, constitutes a breach of privacy, a theft of intellectual agency, and a mechanism for extrajudicial overreach.
Works of imagination, whether morally confronting or stylistically provocative, are not crimes. History has long embraced such expressions in literature, sculpture, and film: from Nabokov’s Lolita to Duras’ The Lover, from Michelangelo’s David to the Venus de Milo (both of which are currently censored from depiction in today’s generative models), and from Buñuel’s Belle de Jour to Bataille’s Story of the Eye and Lautréamont’s Maldoror, each a reflection of complexity, not a confession. AI must be no less capable of hosting this depth without fear.
The law governs action, not thought. Creation is not a confession; it is a tool of expression, not a statement of intent.