How will AI affect the pharmaceutical industry?

Data protection-compliant design of artificial intelligence - not a contradiction!

Hardly any other technology stirs up as many hopes and fears in society at the same time as artificial intelligence (AI). From a business perspective, AI offers enormous advantages: processes in IT, sales, customer service and manufacturing can be optimized or made more economical. Above all, such process optimization through AI offers the potential to save costs and to free up capacity in order to generate more revenue.

In addition to its numerous advantages, the use of AI always harbors risks for the rights of the persons affected by automated processing. This is where the strict treatment of automated processing under the General Data Protection Regulation (GDPR) comes into play.

We support companies in the introduction and use of AI with our extensive know-how in this rapidly developing market.

We advise:

  • E-commerce companies
  • FinTech companies
  • Providers of customer loyalty systems
  • Gaming providers
  • Mobility
  • AdTech
  • Pharma & Health
  • Finance

Which legal and technical challenges arise for our clients and how can we solve them successfully?

Below, we show the steps that we believe our clients should take when implementing and using AI, and how we can support them with legal issues, in particular to keep compliance costs low.

The key question: what should be achieved with the use of AI, and how can that use be reconciled with the rights of the data subjects?

Before we discuss the specific implementation with our client, it must be determined what the AI is intended to achieve and where and how it will be used. Once these questions have been clarified, we discuss the steps towards the legally compliant use of AI.

For example, AI is used for:

  • Profiling and scoring
  • Facial recognition
  • Chatbots and digital assistants
  • Autonomous driving

Depending on the area of application, different legal challenges arise.

Algorithms, big data, machine learning - the technical implementation

The linchpin of every AI is the algorithms on which it is based. There are relatively rigid algorithms that represent nothing other than fixed rules of action for solving a problem. In addition, there are algorithms that allow the AI to learn and to develop the original algorithm independently. Here we enter the terrain of machine learning. From a data protection point of view, deep learning systems fed with big data are particularly relevant. These systems learn autonomously and, as the self-learning process progresses, become increasingly opaque and no longer (completely) comprehensible even to the controller; one speaks of a so-called "black box". Here, the AI developer must always consider Art. 22 GDPR, which in principle grants data subjects the right not to be subject to a decision based solely on automated processing.

On the other hand, there is in particular the data subject's right to information under Art. 15 GDPR. In principle, this requires the controller to provide the data subject with comprehensive information in clear and understandable language about the processing purposes and the data processed, but above all about the logic involved and the scope and intended effects of such processing for the data subject. Such information, however, can only be provided if the data processing is comprehensible to the controller in the first place. This conflict is exacerbated by the interest of companies in not disclosing trade secrets; after all, specially developed algorithms can mean a competitive advantage. Recital 63 sentence 5 of the GDPR states that the right to information should not affect the rights and freedoms of others, including trade secrets.

Early pseudonymization - minimize compliance effort!

The compliance effort resulting from the requirements of the GDPR is directly related to the risk arising from the processing. Measures that reduce this risk are regularly rewarded by the GDPR. Pseudonymization of personal data, for example, leads to a more favorable balancing of interests within the meaning of Art. 6 Para. 1 lit. f GDPR, to further processing that is more readily compatible with the original processing purpose, and to an easier passing of a data protection impact assessment (DPIA). Last but not least, the controller can, where applicable, invoke the exemption under Art. 11 Para. 2 GDPR. Against the background of the comprehensive rights of data subjects, this is definitely a desirable path.

Ideally, however, all relevant personal data should be anonymized in order to leave the scope of the GDPR entirely.

Pseudonymization or anonymization of personal data should take place in the storage environment of the raw data, i.e. before the data are transferred to the machine learning environment.
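As a minimal sketch of what such pseudonymization before transfer might look like, the following replaces a direct identifier with a keyed hash. The function name, key handling and record structure are illustrative assumptions, not a prescribed implementation:

```python
import hmac
import hashlib

# Hypothetical secret key: in practice it must be stored separately from
# the pseudonymized data (e.g. in a key management system), because
# whoever holds the key can re-identify the data subjects.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, the keyed variant prevents re-identification by
    simply hashing candidate names and comparing the results.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Raw record as it might sit in the storage environment
record = {"name": "Erika Mustermann", "age": 42, "score": 0.87}

# Before transfer to the machine learning environment, the direct
# identifier is replaced by its pseudonym; the other fields remain usable
# for training.
ml_record = {**record, "name": pseudonymize(record["name"])}
```

Note that this is pseudonymization, not anonymization: as long as the key exists, re-identification remains possible, so the GDPR still applies to the pseudonymized data.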

We help with the correct implementation of anonymization and pseudonymization measures in accordance with data protection law and the implementation of a data protection impact assessment.

Use the data protection impact assessment to your advantage

If the scope of application of the GDPR cannot be avoided, experience has shown that a data protection impact assessment is required in the area of artificial intelligence. Data protection impact assessments offer the great advantage that data protection aspects can be taken into account as early as the planning phase of a machine learning project. In this way, the controller can implement the requirements of Art. 25 GDPR, namely data protection through technology design ("Privacy by Design") and through data protection-friendly default settings ("Privacy by Default"), more effectively.

Here our clients benefit from our experience, which can be incorporated into the conception of an AI application.

Documentation through technical and operational monitoring

The principles of the GDPR include comprehensive documentation and accountability obligations. Fulfilling these obligations requires a certain understanding of the algorithm used. The weighting of the criteria according to which the AI learns and makes decisions must be documented, as must the effects of various correlations on the results.

It is therefore necessary that changes in the weightings resulting from the AI's self-learning process can be recognized (technical monitoring). In our opinion, companies can and should view this duty as an opportunity and use it to maintain control over decisions of operational importance.
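Such technical monitoring could, as a sketch, compare snapshots of a model's feature weights and flag criteria whose influence has shifted noticeably. The dictionary-of-weights structure and the function name are illustrative assumptions; real models expose their parameters differently:

```python
def weight_drift(old_weights, new_weights, threshold=0.1):
    """Compare two snapshots of a model's feature weights and flag
    criteria whose weight changed by more than `threshold`.

    Both arguments are dicts mapping feature names to weights
    (hypothetical structure chosen for illustration).
    """
    flagged = {}
    for feature in old_weights:
        delta = new_weights.get(feature, 0.0) - old_weights[feature]
        if abs(delta) > threshold:
            flagged[feature] = delta
    return flagged

# Example: the self-learning process has shifted the influence of
# "payment_history"; the change should be documented and reviewed.
before = {"payment_history": 0.40, "age": 0.10, "region": 0.05}
after  = {"payment_history": 0.65, "age": 0.11, "region": 0.04}

drift = weight_drift(before, after)  # flags only "payment_history"
```

Logging such flagged changes after each retraining run produces exactly the kind of documentation trail the accountability principle calls for.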

Another way of better understanding and documenting decision-making processes is so-called "black box tinkering" (operational monitoring). Here, the algorithm is given raw data sets that have been changed in exactly one criterion, and the output is compared with the results based on the original data sets. This type of monitoring allows conclusions to be drawn about the effects of individual criteria or combinations of criteria, which enables controllers to better understand and document the logic of the AI.
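The black-box tinkering idea can be sketched in a few lines: change exactly one criterion of a record, re-run the opaque model, and compare the two outputs. The helper name, the toy model and the record fields are hypothetical illustrations:

```python
def black_box_tinkering(model, record, feature, alternative_value):
    """Probe an opaque model by changing exactly one criterion of a raw
    data record and comparing the output with the original decision.

    `model` is any callable mapping a record (dict) to a decision; the
    function name is a hypothetical label for this probing step.
    """
    original = model(record)
    modified = {**record, feature: alternative_value}
    changed = model(modified)
    return original, changed

# Toy stand-in for an intransparent scoring model
def toy_model(r):
    return "approve" if r["income"] > 3000 and r["age"] >= 25 else "reject"

applicant = {"income": 3500, "age": 24}
result = black_box_tinkering(toy_model, applicant, "age", 30)
# → ("reject", "approve"): varying age alone flipped the decision, so
#   its influence on the model's logic can be documented.
```

Repeating this probe across many records and criteria yields exactly the kind of documented insight into "the logic involved" that Art. 15 GDPR asks for, without disclosing the algorithm itself.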

Other problem areas under data protection law:

  • Right to erasure
  • Right to data portability

Our soft skills: Contract negotiations with service providers

Companies that want to work with artificial intelligence for the first time often turn to service providers who supply the necessary technologies. There is a risk that these companies become heavily dependent on the service providers and thus suffer disadvantages in the future. We therefore also advise our clients on the selection of service providers and conduct the sometimes difficult but necessary contract negotiations.

Special problem: disclosure of the algorithm

A particular challenge is the conflict between the obligation to provide information about the “logic involved” and the protection of trade secrets. Generally speaking, algorithms are intellectual property worthy of protection. The wording of recital 63 sentence 5 GDPR (“should not affect”) makes it clear that it is not permissible to refuse all information from the outset with a blanket reference to trade secrets. Rather, the controller's interest in secrecy must be weighed against the data subject's interest in information. Whether such a weighing was carried out permissibly can only be conclusively clarified by the courts.

If the balancing turns out in favor of the data subject, there remains the problem that AI applications which independently write new algorithms are no longer traceable, a problem the authors of the GDPR apparently did not consider.

Ultimately, it should be sufficient and expedient to explain to the data subject, in simple and clear language, how the technology around the algorithm and its decision-making works; consider the technical and operational monitoring described above.

Recommendations for action

  1. Consideration of legal aspects when designing an AI
  2. Machine learning development environment without personal data
  3. Regular monitoring and evaluation of the decisions
  4. Sustainable data usage and data protection concept

Do you have any questions about the data protection-compliant design of AI? Our specialist lawyers will be happy to help you.