Artificial intelligence (AI)

Artificial intelligence (AI) offers enormous opportunities for innovation and development and will have a profound impact on society. Through advanced algorithms and machine learning, AI can offer solutions to many of society’s challenges. Large language models and other generative AI are now used in most industries to create new and relevant content that mimics human creativity. However, the use of AI also raises legal issues and challenges.

The rapid development of AI raises both ethical and legal issues, for example concerning the handling of data (including personal data), the use of intellectual property and trade secrets – both when training and when using AI – and liability when AI causes harm. Against this background, the EU has adopted the AI Act to establish a framework for responsible use of AI.

Cederquist has extensive experience in working with business and legal issues related to digital developments and is updated on the latest advances and legal developments within AI. With in-depth expertise in IT law, data protection, intellectual property and compliance, we are experienced in supporting and guiding clients through complex legal issues related to AI.

Among other things, we can assist with:

  • Advice on the AI Act and the measures required to comply with it
  • Drawing up agreements such as cooperation agreements, development agreements and licence agreements/SaaS agreements related to AI
  • Security and privacy issues (GDPR) related to AI, e.g. regarding automated decision-making
  • Intellectual property aspects of AI and AI-generated content, including the rights of copyright holders in relation to the use of their material when training AI
  • Drawing up of policies and other internal governance documents for the development and use of AI
  • Risk assessments for different use cases of AI
  • Questions related to autonomous vehicles and products, including product liability questions
  • Questions related to AI and Life Science, and AI and its role in labour, finance and the public sector
  • Strategic advice in areas relevant to projects and collaborations related to AI

Briefly about the AI Act

The AI Act will be directly applicable in all EU Member States, without the need to adopt national laws. It applies to various actors, such as providers and deployers of AI systems, and imposes obligations concerning the development, provision and use of AI systems, with the aim of ensuring the protection of individuals’ fundamental rights.

The Act is structured around a risk-based approach, where the requirements imposed on an AI system are adapted to the risks that the use of the system may entail. A distinction is made between (a) prohibited AI systems, which may not be used on the EU market, (b) high-risk AI systems, which can be placed on the EU market subject to compliance with the applicable requirements, and (c) AI systems that, because of their intended use, are subject to certain transparency requirements. There are also specific rules for general-purpose AI models (sometimes referred to as “foundation models”), such as the models underlying GPT.

Depending on the role of a company in relation to a particular AI system, the company will be subject to different requirements under the AI Act. Some requirements are imposed on providers of AI systems (i.e., companies that develop and place the AI system on the market), while other requirements are imposed on deployers of AI systems (i.e., companies that use the AI system in their business). In addition, certain requirements are imposed on, among others, distributors and importers of AI systems.

A company that does not comply with the rules of the AI Act risks penalties of up to the higher of €35 million or seven per cent (7%) of its global annual turnover.
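As an illustration of the penalty cap, the maximum fine is the larger of the two amounts, so the €35 million figure acts as a floor and the turnover-based figure governs for larger companies. A minimal sketch (the function name is ours and the turnover figures are purely illustrative):

```python
def max_ai_act_penalty(global_annual_turnover_eur: int) -> int:
    """Upper bound of the fine for the most serious infringements:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)

# Turnover of EUR 200 million: 7% is EUR 14 million,
# so the EUR 35 million floor applies.
print(max_ai_act_penalty(200_000_000))    # 35000000

# Turnover of EUR 1 billion: 7% is EUR 70 million.
print(max_ai_act_penalty(1_000_000_000))  # 70000000
```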

On 21 May 2024, the Council of the EU approved the AI Act and the next step is publication in the Official Journal of the EU, after which the rules will enter into force 20 days later. Already six months after entry into force, some rules will start to apply, while the majority of rules will start to apply after 24 months.

Already today, we recommend taking preparatory steps to be able to comply with the AI Act: make an inventory of the AI systems and models developed and/or in use within the organisation, and thereafter:

  • Raise awareness of AI in the organisation and ensure that there are system owners for the AI systems.
  • Classify the AI systems and models in use and/or developed based on the categorisations in the AI Act.
  • Analyse which type of actor the organisation is in the value chain relating to the AI systems used and/or developed, e.g. provider or deployer, and which requirements of the AI Act will potentially apply to the organisation.
  • Create internal governance documents and processes for the development and/or use of AI aimed at enabling the fulfilment of the requirements that will be imposed under the AI Act.
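The inventory and classification steps above can be sketched as a simple data model. The category and role names follow the Act’s risk-based approach and value-chain roles as described here; the system names, owners and classifications are hypothetical examples only, not legal assessments:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"

class Role(Enum):
    PROVIDER = "provider"        # develops and places the system on the market
    DEPLOYER = "deployer"        # uses the system in its business
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    owner: str                   # internal system owner
    role: Role                   # the organisation's role in the value chain
    category: RiskCategory

# Hypothetical inventory entries:
inventory = [
    AISystemRecord("CV screening tool", "HR", Role.DEPLOYER, RiskCategory.HIGH_RISK),
    AISystemRecord("Customer chatbot", "Support", Role.DEPLOYER, RiskCategory.TRANSPARENCY),
]

# Listing the high-risk systems drives the compliance work that follows.
high_risk = [s.name for s in inventory if s.category is RiskCategory.HIGH_RISK]
print(high_risk)  # ['CV screening tool']
```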

Cederquist’s own use of AI

Cederquist has long prioritised being at the forefront of digital development in the legal industry, and we implemented our first AI tool as early as 2017. Today, we see digital tools as important building blocks for streamlining operations where appropriate. In recent years, we have continued to implement and develop new solutions that create value for our employees and our clients. We regularly organise workshops with both clients and our own employees to discuss the opportunities and challenges of AI and other digital tools. We have also put in place an AI policy that provides structure around how the different tools are used and ensures they meet appropriate ethical, privacy and compliance requirements.

Read more here: Innovation & Digital Services – Cederquist.

Please contact us if you have any questions about Cederquist’s use of AI.

