CoderXP AI Policy
Last Updated: April 21, 2026
Effective Date: April 21, 2026
CoderXP develops products and features designed to help teams rapidly create, improve, and ship software using AI-assisted workflows. This Policy outlines CoderXP's approach to responsibly developing, deploying, and using AI in our products and services.
Section 1
CoderXP AI Products and Your Data
Our AI-powered systems help builders move from idea to production faster, supporting planning, code generation, debugging, deployment workflows, and autonomous product execution across the CoderXP platform.
We take responsibility for customer data seriously. Prompt content, configuration details, build instructions, ratings, debugging traces, and interactions with generated output may be processed to generate responses, improve relevance, increase reliability, and support platform safety.
Where plan-specific controls, enterprise restrictions, or account-level privacy settings apply, CoderXP honors those controls when determining whether content is used for model improvement or operational analytics, or is retained for product-quality review.
Section 2
Acceptable Use
Users must not use CoderXP AI systems to create unlawful, abusive, deceptive, or harmful output, or to generate workflows that violate intellectual property rights, privacy rights, or platform security requirements.
Our AI features are intended to accelerate legitimate software creation and operational workflows. Misuse, attempts to evade safeguards, or abusive automation may result in throttling, suspension, or permanent account termination.
Section 3
AI Usage at CoderXP
CoderXP uses AI to support application scaffolding, debugging, deployment assistance, interface generation, workflow automation, and guided product execution. We design these systems to reduce friction while preserving human review where meaningful business or technical risk exists.
CoderXP does not position its platform as a high-risk decision-making system for medical, legal, employment, law-enforcement, or safety-critical determinations. Customers remain responsible for reviewing outputs before production use in sensitive environments.
Section 4
Security, Privacy, and Trust
We aim to build AI systems that are secure, controlled, and transparent in operation. CoderXP applies technical and organizational safeguards designed to protect customer data, limit unauthorized access, and reduce unsafe model behavior.
Even with these safeguards, AI output can be incomplete, misleading, or incorrect. Customers should validate generated code, infrastructure changes, copy, and workflows before relying on them in production environments.
- Generated output should be reviewed by the customer before shipping to end users or deploying to critical environments.
- Sensitive credentials, secrets, and regulated information should only be submitted through workflows the customer has determined are appropriate for their compliance needs.
- Policy, safety, and platform protections may evolve as our AI systems and regulatory obligations develop.
Section 5
Third-Party Service Providers
CoderXP may rely on third-party model providers, cloud providers, observability tools, and infrastructure partners to deliver AI capabilities. Those providers may process limited data as needed to operate the service under our contractual and technical controls.
When third-party systems are involved, their role is limited to supporting the requested product experience, executing infrastructure operations, or delivering model responses through paths integrated into CoderXP.
Section 6
Compliance
CoderXP monitors regulatory developments affecting AI systems, privacy, and software operations, and may update this policy to reflect new product controls, technical safeguards, or legal obligations.
For questions about this policy, platform trust, or AI data handling, contact CoderXP through the support or legal channels made available through the product.
