User Safety & Content Policy
Last updated: January 26, 2026
We balance access to useful information with the need to reduce online harms. Our commitments are to prevent obviously unlawful or high‑risk outputs, protect children, be transparent, and respect privacy.
1. Who we are and how Reona works
Reona is an all-in-one AI assistant powered by cutting-edge AI models. Reona enhances your chat, search, writing, image generation, video generation, and coding experiences by leveraging multiple third-party AI models and tools. We design our orchestration to apply safety controls before, during, and after model calls so that what you see adheres to our rules and the law.
2. Our safety commitments
We are committed to preventing obviously unlawful or high-risk outputs, detecting and acting quickly on problematic content, protecting children with heightened safeguards, being transparent about our processes, providing a clear reporting path, respecting privacy, and complying with AI ethics and regulations.
3. Roles and responsibilities
**Reona Responsibilities:** We maintain layered safeguards to prevent, detect, and respond to illegal or harmful content. We continuously tune safeguards to reflect abuse signals and legal requirements.
**User Responsibilities:** You must be 18 or older and comply with the Usage Policy, this Policy, and applicable law. Do not attempt to bypass safeguards.
**Vendors and Tools:** We rely on third‑party models and tools; functionality may be re‑routed, blocked, or degraded where risks are identified.
4. Illegal content
We prohibit content that violates applicable law, such as terrorism promotion, child sexual exploitation and abuse (CSEA), serious criminal facilitation, or illegal hate materials. We use proactive safeguards (filters, policy-aware routing) and aim to rapidly remove illegal content upon awareness. We may restrict accounts and cooperate with authorities when required.
5. Harmful content
We may refuse, redact, or transform content that is likely to cause harm, such as pornography, encouragement of self-harm or eating disorders, hate speech, realistic graphic violence, or content promoting body shaming. We apply remedies like refusal, safe completion (providing supportive resources), age-appropriate gating, or down-ranking/removal.
6. Product‑level safeguards
**Agent workflows:** Rate limits and human confirmations for sensitive steps.
**AI interaction notice:** Disclosure of AI interaction where not obvious.
**Deepfakes:** Synthetic media must be clearly disclosed as such with labels or watermarks.
**Image/audio/video:** Rules against sexual content involving minors, graphic violence, and prohibited categories.
**Third‑party tools:** We prefer vendors with safety controls and configure them conservatively.
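As a purely hypothetical sketch of how safeguards like rate limits and human confirmation for sensitive agent steps might be composed (all class names, action names, and thresholds below are illustrative assumptions, not Reona's actual implementation):

```python
import time
from collections import deque


class RateLimiter:
    """Illustrative per-user sliding-window rate limiter."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls: deque = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False


# Hypothetical set of actions that require a human in the loop.
SENSITIVE_ACTIONS = {"send_email", "make_payment", "delete_data"}


def run_agent_step(action: str, limiter: RateLimiter, confirmed: bool) -> str:
    """Gate an agent step behind a rate limit and, for sensitive
    actions, an explicit human confirmation."""
    if not limiter.allow():
        return "blocked: rate limit exceeded"
    if action in SENSITIVE_ACTIONS and not confirmed:
        return "pending: human confirmation required"
    return "executed"
```

In this sketch, the rate limit is checked first so that even confirmation prompts cannot be spammed; a real system would also log every gated decision for audit.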
7. Proactive technology and human review
We use layered defenses including policy-aware orchestration, automated content classifiers, hash-matching, blocklists, anomaly detection, and human review for escalations and appeals.
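The layered approach above, combining an exact blocklist, hash-matching against known-bad media fingerprints, an automated classifier, and escalation to human review, could be sketched roughly as follows. Every name, hash, phrase, and score threshold here is an illustrative assumption, not a description of Reona's real pipeline:

```python
import hashlib

# Illustrative data only: real systems use vetted, regularly updated sets.
BLOCKLIST = {"example banned phrase"}
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-bytes").hexdigest()}


def classify(text: str) -> float:
    """Placeholder for an automated content classifier returning a
    risk score in [0, 1]; a real classifier is a trained model."""
    return 0.9 if "violent" in text.lower() else 0.1


def moderate(text: str, media: bytes = None) -> str:
    """Apply checks in layers: blocklist, hash match, then classifier."""
    if text.lower() in BLOCKLIST:
        return "block"  # exact blocklist match
    if media is not None:
        digest = hashlib.sha256(media).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            return "block"  # fingerprint match against known content
    if classify(text) >= 0.8:
        return "escalate_to_human_review"  # high-risk: route to a person
    return "allow"
```

The design choice illustrated here is that cheap deterministic checks (blocklist, hashes) run before the probabilistic classifier, and ambiguous high-risk cases are escalated to humans rather than auto-decided.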
8. Reporting concerns
If you encounter illegal or harmful content, use the in-product Report option or email support@reona.ai. Include context (what you asked, what you saw, timestamps). We aim to acknowledge receipt within 48 hours and complete our assessment within 10 business days.
9. Appeals
If you think we made a mistake (e.g., refusal or removal), you may appeal. We will re-evaluate the context and explain our final decision.
10. Regional safety notes
**UK:** We apply this Policy to address illegal content as defined under the UK Online Safety Act.
**EU:** We comply with the EU Digital Services Act, including deepfake disclosures and notice-and-action processes.
**US:** Reona is offered only to adults; we block sign-ups from users under 18.
11. Effective date & updates
This Policy takes effect on the date shown above and may be updated as features, vendors, or laws change. Material updates will be highlighted in‑product or in our help center.
© 2026 Reona AI. All rights reserved.