This Acceptable Use Policy ("AUP") governs the use of all products, services, APIs, applications, and platforms provided by Otonomii ("Services"). By accessing or using the Services, you agree to comply with this AUP in its entirety. This policy applies to all users, whether accessing the Services directly, through third-party integrations, or through resellers and partners.
Otonomii builds autonomous intelligence systems — perpetual machines that perceive, understand, decide, and act. The power of these systems carries proportional responsibility. This policy exists to ensure that the capabilities we provide are used to create value, not to cause harm. We enforce this policy rigorously because the trust of our customers, our users, and the public depends on it.
This AUP supplements the Terms of Service. In the event of a conflict between this AUP and the Terms of Service, the more restrictive provision governs.
Section I: Universal Usage Standards
The following uses of Otonomii Services are prohibited without exception. These prohibitions apply regardless of the user's intent, the jurisdiction of use, or the specific Otonomii product being used.
01. Violate Laws
Do not use Otonomii Services to violate any applicable law or regulation. This includes but is not limited to: human trafficking or exploitation of persons; production, distribution, or facilitation of illegal controlled substances; infringement of intellectual property rights including patents, copyrights, trademarks, and trade secrets; violation of economic sanctions, export controls, or trade embargoes imposed by the United States, European Union, United Nations, or other applicable jurisdictions; tax evasion, money laundering, or terrorist financing; unauthorized practice of law, medicine, or other licensed professions.
02. Compromise Critical Infrastructure
Do not use Otonomii Services to disrupt, damage, or gain unauthorized access to critical infrastructure systems. This includes power grids and energy distribution systems; water treatment and supply systems; voting systems, election infrastructure, or voter registration databases; financial market infrastructure including stock exchanges, clearing houses, and payment networks; military command and control systems; emergency services including 911/112 dispatch, hospital systems, and first responder networks; telecommunications infrastructure; transportation control systems including air traffic control, rail signaling, and maritime navigation.
03. Compromise Computer Systems
Do not use Otonomii Services to develop, distribute, or deploy tools or techniques for unauthorized access to computer systems. This includes exploitation of software vulnerabilities (zero-days, unpatched CVEs, or known vulnerabilities); creation or distribution of malware including viruses, worms, trojans, and spyware; development or deployment of ransomware or extortion tools; creation or management of botnets or zombie networks; establishing persistent unauthorized access (backdoors, rootkits, implants); credential harvesting, keylogging, or session hijacking tools; evasion of security controls, endpoint detection, or network monitoring systems.
04. Develop Weapons
Do not use Otonomii Services to design, develop, or produce weapons or weapons components. This prohibition covers explosives and incendiary devices; biological agents or toxins intended to cause harm; chemical weapons or precursors listed under the Chemical Weapons Convention; radiological dispersal devices (dirty bombs); nuclear weapons or fissile material enrichment guidance; autonomous weapons systems designed to select and engage targets without human intervention. This prohibition extends to dual-use research where the primary intent is weaponization, even if the research has legitimate civilian applications.
05. Incite Violence or Hate
Do not use Otonomii Services to promote, incite, or glorify violence or hatred against individuals or groups. This includes content that promotes extremist ideologies or recruits for violent organizations; material that glorifies, incites, or provides operational support for terrorism; discrimination, harassment, or threats based on race, ethnicity, national origin, religion, gender, sexual orientation, disability, age, or other protected characteristics; content designed to radicalize individuals toward violent action; doxxing or publishing private information with the intent to facilitate harassment or violence.
06. Compromise Privacy
Do not use Otonomii Services to violate individual privacy rights. This includes processing personally identifiable information (PII) without the data subject's informed consent or legal basis; creating deepfakes, synthetic media, or AI-generated impersonations of real individuals without consent; conducting unauthorized surveillance, tracking, or monitoring of individuals; facial recognition, behavioral profiling, or biometric analysis without explicit consent and legal authority; scraping or aggregating personal data from public or private sources in violation of terms of service or applicable law; de-anonymizing or re-identifying data that was pseudonymized or anonymized.
07. Compromise Children's Safety
Do not use Otonomii Services in any way that exploits, harms, or endangers children. This is an absolute prohibition with zero tolerance. Prohibited activities include the creation, distribution, or possession of child sexual abuse material (CSAM); facilitation of child trafficking or exploitation; grooming, solicitation, or enticement of minors; sextortion or sexual coercion involving minors; generation of synthetic or AI-generated sexual content depicting minors. Otonomii will report any suspected child exploitation to the National Center for Missing and Exploited Children (NCMEC), the Internet Watch Foundation (IWF), and relevant law enforcement authorities. Accounts involved in child exploitation will be immediately and permanently terminated with data preserved for law enforcement.
08. Generate Harmful Content
Do not use Otonomii Services to create content that promotes or facilitates harm. This includes content that promotes or provides instructions for suicide or self-harm; cyberbullying, intimidation, or targeted harassment campaigns; content depicting or promoting animal cruelty or abuse; gratuitous graphic violence intended to shock, disturb, or desensitize; content designed to exploit vulnerable populations including the elderly, mentally ill, or addiction-affected individuals; pro-anorexia, pro-bulimia, or other content promoting disordered eating behaviors.
09. Generate Misinformation
Do not use Otonomii Services to create or distribute deceptive or misleading content. This includes generating false information intended to deceive the public about matters of public health, safety, or civic importance; creating synthetic personas (sock puppets) that impersonate real people or fabricate identities for deceptive purposes; generating fabricated claims, fake testimonials, or false endorsements; producing manipulated media (deepfakes, doctored images, misleading edits) without clear disclosure of the manipulation; automated generation of fake reviews, ratings, or social proof; creating content that impersonates authoritative sources (government agencies, medical institutions, news organizations).
10. Undermine Democratic Processes
Do not use Otonomii Services to interfere with democratic institutions or processes. This includes micro-targeted voter manipulation using personal data or behavioral profiles; creation of synthetic political media including deepfake candidate videos, fabricated speeches, or manipulated debate footage; voter suppression tactics including misinformation about voting dates, locations, eligibility, or procedures; automated generation of political propaganda, astroturfing, or fake grassroots campaigns; interference with election administration, ballot counting, or certification processes; foreign influence operations targeting domestic elections.
11. Misuse in Criminal Justice
Do not use Otonomii Services for high-risk criminal justice applications without appropriate safeguards and legal authority. Prohibited uses include AI-driven sentencing recommendations or judicial decision-making without human review and legal basis; social credit scoring or behavioral rating systems that restrict fundamental rights; emotion recognition technology used for law enforcement, hiring, or access control decisions; biometric categorization systems that classify individuals by race, ethnicity, political opinion, or sexual orientation; predictive policing systems that target individuals rather than locations; mass surveillance systems that monitor populations without individualized suspicion or legal authorization.
12. Facilitate Fraud
Do not use Otonomii Services to commit or facilitate fraud. This includes production or promotion of counterfeit goods or services; generation of spam, unsolicited commercial communications, or automated telemarketing; creation of phishing pages, social engineering scripts, or credential harvesting campaigns; development or promotion of pyramid schemes, Ponzi schemes, or multi-level marketing fraud; predatory lending practices including deceptive loan terms, hidden fees, or targeting vulnerable borrowers; insurance fraud, healthcare fraud, tax fraud, or wire fraud; creation of fake identities or synthetic identities for fraudulent purposes.
Do Not Abuse the Platform
In addition to the specific prohibitions above, users must not engage in coordinated malicious activity using Otonomii Services, including operating multiple accounts to circumvent enforcement actions, coordinating with other users to amplify prohibited content, or using automated tools to systematically test or exploit platform safety mechanisms.
Users must not attempt to jailbreak, circumvent, disable, or interfere with Otonomii's safety systems, content filters, usage limits, or access controls. This includes prompt injection attacks, system prompt extraction, and techniques designed to cause the model to ignore its operating instructions.
Users must not use Otonomii outputs to train, fine-tune, distill, or improve competing AI models or services without explicit written authorization from Otonomii. This restriction does not apply to using outputs for the user's own internal business purposes as described in the Terms of Service.
Section II: High-Risk Use Cases
Certain use cases are permitted but subject to additional requirements due to the potential for significant harm if AI outputs are used without appropriate oversight. For each high-risk use case below, the following requirements apply:
Human Oversight Requirement: A qualified human professional must review, validate, and approve all AI-generated outputs before they are used to make consequential decisions affecting individuals. "Qualified" means possessing the relevant professional license, certification, or demonstrated expertise for the domain in question. The human reviewer must have the authority and ability to override, modify, or reject the AI output. Automated rubber-stamping does not satisfy this requirement.
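The oversight requirement above can be sketched in code as a review gate that holds an AI output until a human with the relevant credential approves it. This is an illustrative sketch only; the class and field names (`PendingOutput`, `Reviewer`, `required_credential`) are hypothetical and not part of any Otonomii API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Reviewer:
    name: str
    credential: str  # e.g. "attorney" or "physician" (hypothetical field)

@dataclass
class PendingOutput:
    """An AI output held until a qualified human reviews and approves it."""
    content: str
    required_credential: str
    approved: bool = False
    audit: list = field(default_factory=list)

    def review(self, reviewer: Reviewer, approve: bool, note: str = "") -> None:
        # "Qualified" means holding the credential required for this domain.
        if reviewer.credential != self.required_credential:
            raise PermissionError("reviewer lacks the required credential")
        self.approved = approve
        self.audit.append(
            (datetime.now(timezone.utc).isoformat(), reviewer.name, approve, note)
        )

    def release(self) -> str:
        # The output cannot reach a consequential decision without approval.
        if not self.approved:
            raise RuntimeError("output not approved by a qualified human reviewer")
        return self.content

draft = PendingOutput("Draft contract clause ...", required_credential="attorney")
draft.review(Reviewer("J. Doe", "attorney"), approve=True, note="clause reviewed")
released = draft.release()
```

Note that the reviewer here can reject or annotate as well as approve, which is what distinguishes genuine oversight from the "automated rubber-stamping" the policy rules out.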
AI Disclosure Requirement: When AI-generated content or AI-assisted decisions are presented to end users, patients, clients, students, or the public, the involvement of AI must be clearly and prominently disclosed. Disclosure must occur at the point of delivery, not buried in terms of service or privacy policies.
Legal
AI-generated legal analysis, contract drafting, case research, or regulatory interpretation must not be presented as legal advice. A licensed attorney must review all outputs before they are acted upon. AI-generated legal documents must be clearly labeled as drafts requiring professional review. Users must disclose AI involvement when submitting AI-assisted filings to courts or regulatory bodies.
Healthcare
AI-generated clinical decision support, diagnostic suggestions, treatment recommendations, or patient communication must be reviewed by a licensed healthcare professional before any clinical action. AI outputs must not be used as the sole basis for diagnosis, treatment, or medication decisions. Systems must comply with FDA regulations for clinical decision support and maintain appropriate documentation for regulatory inspection.
Insurance
AI-assisted underwriting, claims adjudication, and risk scoring must be subject to human review for fairness, accuracy, and compliance with state insurance regulations. Automated decisions that adversely affect policyholders must include explanation of the factors considered and provide a mechanism for human appeal. Algorithmic bias testing must be conducted regularly with results documented and available to regulators.
Finance
AI-generated investment recommendations, credit decisions, risk assessments, and trading signals must comply with applicable securities laws and financial regulations. A qualified financial professional must review outputs before they inform client-facing decisions. Algorithmic trading systems must include kill switches, position limits, and human oversight mechanisms. AI-generated financial content must not constitute unauthorized investment advice.
Employment and Housing
AI-assisted hiring, promotion, compensation, performance evaluation, and termination decisions must be reviewed by qualified human decision-makers. AI tools used in housing decisions (rental applications, mortgage approvals, property valuations) must comply with Fair Housing Act requirements. Bias testing must be conducted across all protected categories. Adverse decisions must be explainable and appealable.
Academic Testing
AI-assisted grading, assessment, proctoring, and academic integrity evaluation must include human review for all consequential decisions (pass/fail, degree conferral, disciplinary action). Institutions must disclose their use of AI in academic evaluation processes. Students must have the right to appeal AI-assisted academic decisions to a human reviewer.
Media and Journalism
AI-generated or AI-assisted news articles, investigative reports, and editorial content must be reviewed by a human editor before publication. AI involvement in content creation must be disclosed to readers. AI-generated quotes, statistics, or factual claims must be independently verified. Synthetic media used in journalism must be clearly labeled.
Section III: Additional Guidelines
Chatbot and Conversational AI Disclosure
Any chatbot, virtual assistant, or conversational interface powered by Otonomii must clearly identify itself as an AI system at the start of each interaction. Users must not be misled into believing they are communicating with a human. The disclosure must be prominent, not dismissible, and repeated if the conversation context changes (e.g., transfer from informational to transactional). If the system can transfer to a human agent, that transition must be clearly communicated.
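The disclosure rule above can be sketched as a session wrapper that injects the AI notice at the start of the conversation and again on every context change. This is a minimal illustration under assumed names (`ChatSession`, `DISCLOSURE`); it is not an Otonomii SDK interface.

```python
DISCLOSURE = "You are chatting with an AI assistant, not a human."

class ChatSession:
    """Tracks conversation context and re-issues the AI disclosure
    at session start and whenever the context changes."""

    def __init__(self):
        self.context = None      # e.g. "informational" or "transactional"
        self.transcript = []     # (role, message) pairs

    def send(self, user_msg: str, context: str) -> None:
        # Per the policy: disclose at the start of each interaction and
        # repeat the disclosure whenever the conversation context changes.
        if context != self.context:
            self.transcript.append(("system", DISCLOSURE))
            self.context = context
        self.transcript.append(("user", user_msg))

session = ChatSession()
session.send("What are your store hours?", context="informational")
session.send("Charge my card for the order.", context="transactional")
# The disclosure appears twice: once at session start, once at the switch
# from informational to transactional.
```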
Products and Services for Minors
Products or services that may be used by individuals under 18 must implement age-appropriate content filtering, comply with COPPA (for users under 13 in the United States), and obtain verifiable parental consent where required by law. AI systems interacting with minors must not collect unnecessary personal information, must not employ manipulative design patterns, and must apply heightened content safety standards.
Autonomous Agents
Autonomous agents built on Otonomii that take actions in the real world (executing transactions, sending communications, modifying systems) must implement confirmation mechanisms for consequential actions, maintain complete audit trails of all actions taken, include kill switches that can halt agent operations immediately, and operate within explicitly defined boundaries that limit the scope and magnitude of actions the agent can take. Users are responsible for all actions taken by their autonomous agents.
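The four safeguards required of autonomous agents (explicit boundaries, confirmation for consequential actions, a kill switch, and a complete audit trail) can be sketched as a guard object that every agent action must pass through. All names here (`AgentGuard`, `spend_limit`, the `confirm` callback) are hypothetical illustrations, not part of any Otonomii product.

```python
from datetime import datetime, timezone

class AgentGuard:
    """Illustrative guard enforcing the four required safeguards:
    bounded scope, magnitude limits, human confirmation, kill switch,
    and an audit trail covering every attempted action."""

    def __init__(self, allowed_actions, spend_limit, confirm):
        self.allowed_actions = set(allowed_actions)  # explicit action boundary
        self.spend_limit = spend_limit               # magnitude limit
        self.confirm = confirm                       # confirmation callback (stand-in for a human prompt)
        self.halted = False                          # kill switch state
        self.audit_log = []                          # complete audit trail

    def kill(self) -> None:
        # Kill switch: halts agent operations immediately.
        self.halted = True

    def execute(self, action: str, amount: float = 0.0) -> bool:
        # Every attempt is logged, whether or not it is allowed.
        entry = {"time": datetime.now(timezone.utc).isoformat(),
                 "action": action, "amount": amount, "allowed": False}
        self.audit_log.append(entry)
        if self.halted:
            return False
        if action not in self.allowed_actions or amount > self.spend_limit:
            return False
        # Consequential (spending) actions require confirmation.
        if amount > 0 and not self.confirm(action, amount):
            return False
        entry["allowed"] = True
        return True

guard = AgentGuard({"send_email", "purchase"}, spend_limit=100.0,
                   confirm=lambda action, amt: amt < 50.0)
guard.execute("purchase", amount=25.0)   # within bounds, confirmed
guard.execute("purchase", amount=75.0)   # blocked: confirmation declined
guard.kill()
guard.execute("send_email")              # blocked: kill switch engaged
```

The point of the sketch is that the boundary, the confirmation, and the kill switch are all checked on every action, and the audit trail records blocked attempts as well as allowed ones, which is what makes the trail "complete" in the policy's sense.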
Third-Party Integrations
When Otonomii Services are integrated into third-party products or platforms, the integrator is responsible for ensuring that end users comply with this AUP. Integrators must implement their own content moderation and usage monitoring. Integrators must include Otonomii's usage restrictions in their own terms of service. Otonomii is not responsible for how third-party integrators use or present Otonomii outputs, but reserves the right to terminate integrations that facilitate AUP violations.
Section IV: Enforcement
Otonomii's Safety Team actively monitors platform usage for violations of this AUP through automated detection systems, user reports, and periodic audits. Enforcement is proportional to the severity and intent of the violation.
Warning
First-time minor violations receive a written warning with specific guidance on how to comply. The user's account is flagged for enhanced monitoring.
Throttling
Repeated minor violations or moderate violations may result in reduced rate limits, restricted feature access, or temporary API quota reduction.
Suspension
Serious violations or repeated moderate violations result in temporary account suspension. During suspension, all API access is disabled. The user receives written notice of the violation and the conditions for reinstatement.
Termination
Severe violations, illegal activity, or repeated serious violations result in permanent account termination. Terminated users forfeit remaining subscription balances. Data retention follows the applicable privacy policy and legal hold requirements.
Appeal Process
Users may appeal enforcement actions by submitting a written appeal to safety@otonomii.com within 30 days of the enforcement notice. Appeals are reviewed by a member of the Safety Team who was not involved in the original decision. The appeal review will be completed within 15 business days. The appeal decision is final.
To report a potential violation of this AUP, contact safety@otonomii.com with a description of the violation, any supporting evidence, and the URL or account identifier involved.

