What people ask us

Are you compliant with GDPR?

Yes.

GDPR applies to AI solutions developed and applied in the EU and EFTA (collectively referred to as the EEA) where personal information is used, even when that information is publicly available. Compliance in the context of AI here means two things:

1) Where personal information is used, the organization must have a lawful basis for its collection and processing, for instance through user consent, contractual necessity, or legitimate interest. This means the AI solution must be built with the relevant lawful basis in mind if it is to use personal data; otherwise, the data has to be anonymized before it is processed.

2) The data must be stored either in the EEA or in a country that the European Commission determines to have a "comparable level of protection of personal data" to the EU. Many AI solutions build directly on publicly available LLMs from American companies such as OpenAI and Anthropic, where data is usually stored on US servers. Both OpenAI and Anthropic have official Data Processing Addenda (DPAs) publicly available, ensuring GDPR compliance even when the data is processed on US servers. Depending on the use case, however, it may be necessary to assess whether the intended usage of the data is adequately addressed in the DPA.

Where extra security is needed beyond GDPR compliance, using Azure for AI development allows flexibility in choosing server locations. Although a more expensive option, this enables us to choose servers in the EU, EEA, US, or elsewhere, as the use case and business requirements demand.
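As a sketch of what this looks like in practice, an Azure OpenAI resource is pinned to a region at creation time, so EU data residency can be enforced from the start. The resource name, resource group, and region below are illustrative placeholders, not our actual setup:

```shell
# Create an Azure OpenAI resource in an EU region (Sweden Central),
# so model inference and data processing stay on EU servers.
# Names and region are example values only.
az cognitiveservices account create \
  --name example-openai-resource \
  --resource-group example-rg \
  --kind OpenAI \
  --sku S0 \
  --location swedencentral
```

Because the region is fixed per resource, a project with stricter residency requirements simply gets its own resource in the required region.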

Our planning and preparation process for developing AI solutions involves careful consideration of which information the AI solution will process. Where personal information will be used, we take clear measures to ensure GDPR compliance.

Are you ISO 27001 certified?

No.

We follow the recommendations of the Norwegian Digitalization Directorate (DigDir). While an ISO 27001 certification has its benefits, DigDir notes that this is a costly process and not the only alternative for working systematically with governance and control of information security.

We have accordingly decided not to prioritize an ISO 27001 certification at this stage of our business. Instead, we build on DigDir's "Internal Control in Practice - Information Security" and "Holistic Governance and Control of Information Security" to shape our approach to strong internal controls and information security.

We take security concerns raised by our clients and stakeholders seriously, and we take the precautions required by the business requirements of each project and solution. Where clients require ISO 27001 certification of their suppliers for certain projects, we can facilitate contact with our relevant partners who hold these certifications.

Are you compliant with the EU AI Act?

Yes.

Understanding what compliance with the EU AI Act means requires an understanding of the Act's timeline, as this is a fairly new piece of legislation:

  • The EU AI Act entered into force in August 2024

  • The ban on prohibited practices applies from February 2025

  • Codes of practice should be ready by May 2025 to help providers demonstrate their compliance

  • Obligations for providers of general-purpose AI models apply from August 2025

  • Obligations for most high-risk AI systems apply from August 2026

Official codes of practice for compliance won't be available before May 2025. Accordingly, our current focus is first and foremost preparatory. For us, this means four key principles:

  1. We take seriously our responsibility to understand and assess the risk levels associated with different use cases of AI solutions;

  2. We avoid taking on projects involving development of solutions that pose an "unacceptable risk" as described by the European Commission or other authoritative institutions on the topic;

  3. We conduct thorough risk assessment and management, and apply extra strict requirements to development of solutions classified as "high risk", mitigating and containing these risks as much as possible; and

  4. We address "specific transparency risk" where necessary to inform users that they are interacting with a machine rather than a human.

We plan to consider additions or changes to our policy on compliance with the EU AI Act when the official codes of practice are released in May 2025.

Contact Us

Contact us for more information

vetle@aiautomatisering.net

+47 97536533

Book a Discovery Call