A Foundation of Trust for AI Development

    Mimecast's Pledge

    Mimecast’s mission is to transform the way organizations manage and mitigate risk with its AI-powered, API-enabled, connected Human Risk Management platform. Leveraging artificial intelligence (AI), machine learning (ML), and data science is critical to that mission. For us, this also requires establishing a foundation of trust with our customers and partners through transparency about how their data may be processed.

    Principles
    • Privacy: We follow a privacy-by-design approach throughout AI/ML model development and deployment.
    • Fairness: We use methods and processes designed to reduce unintentional bias for individuals and organizations.
    • Transparency: We communicate clearly to customers how AI/ML models are used in our services.
    • Interpretability: We use methods and reporting to make model results explainable and interpretable.
    • Safety: We use human-in-the-loop training and evaluation processes to ensure the services we offer do not pose risks to users.
    • Accountability: We ensure appropriate oversight of our AI/ML models.
    • Sustainability: We recognize the growing importance of sustainability and environmental impact in the development and deployment of AI technologies. We are committed to exploring and aligning our AI initiatives with sustainability goals as part of our ongoing framework.

    Customer Data Usage in AI/ML Model Development

    All AI/ML models begin with data. Mimecast takes the stewardship of customer data very seriously. For this reason, we adhere to our customer agreements as well as applicable regulations such as the GDPR, the CCPA, and the emerging EU AI Act, and we go well beyond what those frameworks require:

    • Data access is tightly controlled by role-based access following a “least privilege access” paradigm.
    • Geographic data residency requirements are followed throughout the development and deployment processes.
    • Training and production data are never commingled, yet both categories are protected by the same technical and organizational measures.
    • Models are never trained on individual-level characteristics, a practice designed to minimize the risk of bias.
    • Data minimization is implemented by limiting the amount, granularity, and storage duration of information in training datasets.

    AI/ML Model Development Process

    Industry standard frameworks and practices for AI development, such as NIST’s AI RMF, are used to guide model development. Adoption of these practices, including those outlined below, allows Mimecast to proactively address potential sources of bias, make AI models more reliable and valid, and improve explainability, replicability, and transparency.

    • Advanced data sampling techniques, preventing overweighting of specific data sources and helping debias the resulting models.
    • Limited use of non-domain data, increasing reliability and validity of outcomes.
    • Extensive human-in-the-loop data and model evaluation processes, improving trustworthiness.
    • Frequent updates to AI/ML models, reducing the risk of data-model drift and helping maintain validity and reliability.
    • Model and data versioning, facilitating reproducibility and explainability.
    • Thoroughly and transparently documenting the AI/ML model development process, holding practitioners accountable for adhering to these processes.
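    The sampling idea in the first bullet above can be sketched as a per-source cap, so that no single data source dominates a training sample (the `source` grouping key and even-cap policy are illustrative assumptions, not a description of Mimecast's pipeline):

```python
import random

def balanced_sample(records: list[dict], k: int, seed: int = 0) -> list[dict]:
    """Draw up to k records, spread evenly across sources so that no
    single source is overweighted in the resulting training sample."""
    rng = random.Random(seed)
    by_source: dict[str, list[dict]] = {}
    for r in records:
        by_source.setdefault(r["source"], []).append(r)
    per_source = max(1, k // len(by_source))  # even cap per source
    sample: list[dict] = []
    for recs in by_source.values():
        sample.extend(rng.sample(recs, min(per_source, len(recs))))
    return sample[:k]
```

    Even if one source contributes far more raw records than the others, each source supplies at most its capped share of the sample.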

    AI/ML Model Applications
    • Extensive monitoring for data drift, designed to reduce the risk of models being applied in the wrong circumstances.
    • Logging and traceability of model performance and results.
    • Adhering to Mimecast’s strict Software Development Lifecycle (SDLC) policies and procedures, including security approvals and reviews.
    • Performing routine audits using comparable, publicly available models to assure continued model performance relative to industry standards.
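    Drift monitoring like that described in the first bullet above is commonly implemented with a distribution-comparison statistic. A minimal sketch using the Population Stability Index (the bin proportions are made up, and the 0.2 alert threshold is a common convention, not a Mimecast-specific setting):

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned proportions.
    Values above ~0.2 are conventionally treated as significant drift."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

train_bins = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live_bins = [0.10, 0.20, 0.30, 0.40]   # shifted distribution in production
score = psi(train_bins, live_bins)
if score > 0.2:
    print(f"drift alert: PSI = {score:.3f}")
```

    A score near zero means production data still resembles the training distribution; a large score flags that the model may be operating outside the conditions it was validated for.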

    Privacy
    • Establishing a Responsible AI Council that sets policy and assures adherence to strict guidelines across our services.
    • Evaluating and implementing privacy-enhancing techniques, such as de-identification, pseudonymization, and anonymization, as appropriate.
    • Prohibiting the sharing with third parties of any customer data that may be used in training datasets.
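    The techniques named above differ in reversibility. As a sketch of pseudonymization only (the key handling and token length here are illustrative assumptions; a real deployment would fetch the key from a secrets store), direct identifiers can be replaced with keyed hashes so records remain linkable without exposing the identifier itself:

```python
import hashlib
import hmac

# Illustrative only: a real key would live in a secrets manager, never in code.
KEY = b"example-secret-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed-hash token."""
    return hmac.new(KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

token = pseudonymize("alice@example.com")
# The same input always yields the same token, so joins across records still
# work, but the raw email address never appears in the training data.
```

    Unlike plain hashing, the keyed construction means the mapping cannot be rebuilt by anyone who lacks the key, and rotating the key severs the link entirely.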