Policy for Responsible and Ethical Generative AI Best Practices
(ver. 01 of 15/01/2025)
1. Introduction
This document outlines our commitment to the responsible and ethical use of Generative AI in alignment with best practices. Our approach prioritizes accuracy, safety, transparency, user empowerment, and sustainability while addressing potential risks and biases in AI-generated content.
These guidelines are inspired by the document "Guidelines on the responsible use of generative AI in research", published by the European Commission together with the European Research Area countries and stakeholders.
2. Ensuring ethical use of Generative AI
At Omnys, we have developed and implemented comprehensive AI ethics policies to guide the development and deployment of GenAI systems. These policies include:
- Guarding against Bias in Data: Generative AI models, trained on extensive datasets, may inadvertently perpetuate biases present in the training data. Proactive measures, including careful data selection, augmentation, and balancing, are crucial to mitigate these biases; a minimal illustrative check appears in the sketch after this list. Diverse and representative datasets contribute to more equitable AI outcomes.
- Maintaining Transparency: Transparency plays a pivotal role in managing risks associated with generative AI. Clear communication regarding AI-generated content is essential to prevent confusion and deception. Disclosing the origin of AI-generated content when shared publicly fosters trust and upholds ethical standards.
- Securing Sensitive Information: Generative AI models trained on sensitive data can raise privacy concerns, so sensitive information should be anonymized or removed before training. Robust security measures such as encryption, access control, and regular audits should be implemented to safeguard this data against unauthorized access and breaches (see also the data-preparation sketch after this list).
- Ensuring Accountability and Explainability of AI Systems: As AI technology continues to advance, accountability and transparency must be prioritized to uphold ethical use. Documenting the development and deployment process establishes a transparent chain of responsibility. Explainability in AI decision-making enables users to understand the factors influencing AI-generated outputs.
- Continuous Monitoring and Iteration: AI models require constant monitoring to identify and address risks. Regular evaluations, user feedback mechanisms, and ethical reviews help the systems adapt and improve iteratively. Establishing feedback loops empowers users to report problematic outputs or biases.
- Educating and Empowering Users about Generative AI: Effective risk management involves educating and empowering users to navigate potential pitfalls. Providing guidelines and best practices encourages users to assess AI-generated content critically. User education initiatives should raise awareness about limitations, biases, and risks associated with generative AI.
- Addressing Legal and Ethical Concerns: Generative AI raises important legal and ethical questions regarding intellectual property ownership and the use of data. Adhering to relevant legal frameworks, obtaining necessary rights and permissions, and ensuring transparency in data processing are crucial to avoiding legal repercussions and ethical dilemmas.
- Collaborating with Experts and Stakeholders: Organizations should collaborate with experts and engage stakeholders to manage generative AI risks effectively. Involving ethicists, legal professionals, data scientists, and domain experts brings diverse perspectives to identify and address potential risks and ethical considerations.
- Considerations for AI-Generated Content: Clear content guidelines and review processes ensure that AI-generated content aligns with organizational values, brand image, and legal requirements. Human oversight and editorial control are essential for maintaining the accuracy, quality, and relevance of AI-generated content.
- Regular Updates and Maintenance: Generative AI technologies evolve, introducing new risks. Regular updates and maintenance are essential to incorporate improvements, address emerging threats, and ensure compliance with changing ethical standards.
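The data-handling practices above (careful data selection and balancing, and anonymization of sensitive information) can be supported by simple automated checks in the training pipeline. The sketch below is a minimal illustration only, not part of this policy: it assumes a tabular training set held as a list of dictionaries, uses regular-expression patterns for two common PII formats (e-mail addresses and phone-like numbers), and a hypothetical `group` field for the balance check.

```python
import re
from collections import Counter

# Illustrative PII patterns. Assumption: only e-mail addresses and
# phone-like digit sequences are handled here; a production pipeline
# would rely on a vetted PII-detection tool and a broader pattern set.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # e-mail addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),      # phone-number-like sequences
]

def redact_pii(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace matched PII spans with a placeholder before training."""
    for pattern in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def group_shares(records: list[dict], group_field: str = "group") -> dict:
    """Report each group's share of the training set.

    `group_field` is a hypothetical column; strongly skewed shares flag
    the need for re-sampling or augmentation before training."""
    counts = Counter(r.get(group_field, "unknown") for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Toy example (illustrative data only).
records = [
    {"text": "Contact me at jane.doe@example.com", "group": "A"},
    {"text": "Call +39 0444 000000 for details", "group": "B"},
    {"text": "No personal data in this record", "group": "A"},
]
cleaned = [{**r, "text": redact_pii(r["text"])} for r in records]
print(group_shares(cleaned))   # {'A': 0.666..., 'B': 0.333...}
```

Checks of this kind complement, rather than replace, the organizational measures described above (encryption, access control, and regular audits of the stored data).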
3. Best Practices for ethical deployment of Generative AI
Our best practices for the ethical and responsible use of generative AI can be summarized as follows:
- Fairness and Non-discrimination: We are committed to ensuring that the GenAI systems we develop do not perpetuate or amplify biases based on race, gender, age, or other protected characteristics, supported by strict manual evaluation.
- Transparency and Explainability: We strive to make GenAI decision-making processes as transparent as possible, providing explanations for its outputs when feasible.
- Privacy and Data Protection: We adhere to strict data protection standards, ensuring that user data is collected, processed, and stored securely and ethically, with appropriate encryption.
- Accountability: We have established clear lines of responsibility for the development, deployment, and monitoring of our GenAI systems, and we regularly evaluate and validate compliance with these guidelines.
- Human Oversight: We maintain human oversight in critical decision-making processes, ensuring that AI assists rather than replaces human judgment in sensitive areas (e.g., databases, critical data, critical operations); a minimal illustrative approval gate is sketched after this list.
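As an illustration of the human-oversight commitment above, the following sketch shows one possible approval gate in which AI-suggested actions touching sensitive resources are held for explicit human confirmation before execution. The `SENSITIVE_RESOURCES` set, the `Action` type, and the callbacks are hypothetical names introduced only for this example; they do not describe an Omnys interface.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical set of resources treated as critical in this sketch.
SENSITIVE_RESOURCES = {"production_database", "customer_records", "billing"}

@dataclass
class Action:
    """An AI-suggested operation on a named resource."""
    resource: str
    description: str

def execute_with_oversight(
    action: Action,
    run: Callable[[Action], None],
    approve: Callable[[Action], bool],
) -> bool:
    """Run `action` directly unless it touches a sensitive resource,
    in which case a human reviewer must approve it first."""
    if action.resource in SENSITIVE_RESOURCES and not approve(action):
        print(f"Rejected by reviewer: {action.description}")
        return False
    run(action)
    return True

# Example usage with stand-in callbacks (illustrative only).
def run_action(action: Action) -> None:
    print(f"Executing: {action.description}")

def ask_reviewer(action: Action) -> bool:
    # In practice this would notify a named reviewer and wait for a
    # decision; here we deny by default to show the safe path.
    return False

execute_with_oversight(
    Action("production_database", "drop unused index"), run_action, ask_reviewer
)
```

The design choice illustrated here is that the default path for sensitive resources is refusal: the AI suggestion is only executed when a human reviewer explicitly approves it.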
4. References
“Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum”
https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en