# iView AI Security Policy

## 1. Introduction

At iView Learning, we are committed to the secure, ethical, and responsible development and deployment of Artificial Intelligence (AI) systems. This policy outlines the standards and procedures that apply to anyone developing AI systems for our products, as well as to our internal use of AI tools.

## 2. Scope

This policy applies to:

- All employees, contractors, and third-party vendors.
- Development of proprietary AI models.
- Integration of third-party AI services.
- Internal use of AI tools that process customer data.

## 3. Data Privacy & Protection

- **No Training on User Data**: iView Learning strictly prohibits the use of customer data (including course content, notes, and personal information) for training external AI models, unless explicitly authorised by the data owner.
- **Encryption**: All data processed by AI systems must be encrypted in transit (TLS 1.2 or higher) and at rest (AES-256).
- **Data Minimization**: Only the minimum amount of data necessary for the specific AI task shall be processed.

## 4. Vendor Security

- All third-party AI vendors must undergo a rigorous security assessment before integration.
- We require Data Processing Agreements (DPAs) with all AI vendors to ensure they adhere to our privacy standards.
- Vendors are prohibited from using iView customer data to train their foundation models.

## 5. Ethical AI Use

- AI systems must be designed to support learning, not replace it.
- We implement guardrails to prevent the generation of harmful, biased, or inappropriate content.
- Human-in-the-loop mechanisms are maintained for critical decision-making processes.

## 6. Access Control

- Access to AI development environments and customer data sets is restricted to authorised personnel on a least-privilege basis.
- Multi-Factor Authentication (MFA) is required for all access to sensitive systems.

## 7. Compliance

This policy is reviewed annually to ensure alignment with evolving laws (e.g., GDPR, the EU AI Act) and industry best practices.

*Last Updated: February 2026*
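As an illustrative (non-normative) appendix: the in-transit encryption requirement in Section 3 can be enforced in client code by pinning a minimum TLS version. The sketch below uses Python's standard `ssl` module; the function name is hypothetical and not part of this policy.

```python
import ssl

def make_ai_client_context() -> ssl.SSLContext:
    """Build a TLS context that refuses connections below TLS 1.2,
    per the in-transit encryption requirement (Section 3).

    Note: this is a minimal illustrative sketch, not mandated tooling.
    """
    ctx = ssl.create_default_context()  # certificate verification on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
    return ctx
```

Any HTTPS client used to call an AI service can be handed this context, so connections that negotiate a protocol older than TLS 1.2 fail outright rather than silently downgrading.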
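Similarly illustrative: the data-minimization principle in Section 3 is often implemented as an allowlist filter applied before any record is forwarded to an AI service. The field names below are hypothetical examples, not a prescribed schema.

```python
# Hypothetical per-task allowlist: only the fields the AI task actually
# needs are forwarded; everything else (emails, private notes) is dropped.
ALLOWED_FIELDS = {"course_id", "question_text"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only allowlisted fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "course_id": "c-101",
    "question_text": "What is TLS?",
    "student_email": "a@example.com",  # PII: must not reach the AI service
    "notes": "private",
}
print(minimize(record))  # {'course_id': 'c-101', 'question_text': 'What is TLS?'}
```

Defining the allowlist per AI task (rather than a global blocklist) means any newly added field is excluded by default, which matches the "minimum necessary" wording of the policy.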