RightShip’s Approach to Responsible AI Use 

Our Commitment to Responsible AI 
At RightShip, we are dedicated to developing and using Artificial Intelligence (AI) responsibly and ethically. Our AI solutions are designed to enhance productivity and improve services while embedding our core values of transparency, safety, accountability, and fairness into every aspect of our AI initiatives. We aim to make meaningful improvements with AI while managing the associated risks responsibly. 

Scope of Our AI Policy 
Our AI policy applies to all AI systems developed or deployed by RightShip. This includes every business unit, subsidiary, employee, and contractor. It governs all AI activities, from research and development to procurement, ensuring consistent and responsible AI usage across the organisation. 

Objectives of Our Responsible AI Practices 

  • Enhancing Services: We leverage AI to boost productivity and improve customer experiences by delivering more effective and efficient services. 
  • Risk Mitigation: We recognise the unique risks AI can introduce, such as model bias and ethical concerns. We proactively mitigate these risks while upholding RightShip’s corporate values. 
  • Regulatory Compliance: Our AI systems comply with applicable regulations, data privacy laws, and internal policies. 

Guiding Principles for AI Development and Use 
Our AI development is guided by principles that ensure alignment with our corporate values and support responsible use: 

  • Accountability: Human oversight is at the core of all AI systems. Every AI project has a designated accountable owner to ensure appropriate control and oversight. 
  • Sustainability: We consider the environmental and societal impact of AI, striving for solutions that are sustainable in the long term. 
  • Safety: Minimising unintended consequences is crucial. AI applications are designed to prioritise the safety of our customers, partners, and broader community. 
  • Transparency & Explainability: Users should always understand when they are interacting with AI and how AI decisions are made. We prioritise explainable AI to maintain trust. 
  • Robustness & Security: AI systems are developed to be robust against potential errors and resilient against malicious attacks. 
  • Fairness & Equity: We aim to eliminate biases and ensure that AI solutions are fair, providing equal value to all users. 
  • Privacy: Compliance with privacy and data protection laws is non-negotiable. Our AI systems are designed to protect individual privacy. 

AI Practices We Avoid 
RightShip will not engage in AI activities that contravene our values or legal guidelines. Specifically, we avoid: 

  • Behavioural Manipulation: Using AI to influence individuals' behaviours in harmful ways. 
  • Biometric Categorisation: Classifying people based on sensitive characteristics without explicit consent or legal basis. 
  • Social Scoring: Evaluating or classifying individuals using social scoring techniques that could result in unjust discrimination. 
  • Unauthorised Facial Recognition: Using AI for indiscriminate facial recognition or emotion analysis, except where explicitly required for safety purposes. 

AI Governance Framework 

  • Corporate Oversight: Our CTO, CEO, and Board of Directors ensure that AI initiatives align with RightShip’s values and that risks are managed appropriately. 
  • Project Governance: Every AI initiative follows a thorough process involving problem validation, solution development, and performance monitoring, ensuring responsible AI deployment at every step. 
  • Risk Management: Each project undergoes rigorous risk assessments to identify and mitigate potential issues, involving stakeholders at every level to ensure full transparency and responsibility. 

AI Procurement 
When RightShip procures AI solutions from third parties, we use contractual agreements to ensure compliance with our internal AI policies and to hold those solutions to the same standards as our internally developed AI systems. 

Continuous Review and Improvement 
Our Responsible AI policy is reviewed annually to ensure that it remains relevant in an evolving technological and regulatory landscape. Any exceptions to this policy require written approval from our Chief Technology Officer (CTO).