Crime Detection with Palantir’s Artificial Intelligence by the Police in Germany

28 August 2025

On 30 July 2025, German media reported that Federal Minister of the Interior Alexander Dobrindt (CSU) is examining the introduction of the “Gotham” software, developed by the American company Palantir*, for crime detection by police forces across the federal states. The “Gotham” software is used by police and intelligence services to link and analyse large volumes of data from different sources. This makes it possible, for example, to recognise faces and uncover connections between people and events. It is intended to support investigations and prevent crime.

Police in certain German federal states have been using Palantir’s software since 2017. According to its advocates, it is an essential digital tool in the fight against organised crime and terrorism. The first federal state to introduce Palantir’s software was Hesse in 2017; North Rhine-Westphalia followed in 2020. In 2022, Bavaria signed a framework agreement with the company, under which other federal states and the federal government may join at any time without a new public procurement procedure. Since 2024, Bavaria has also been using Palantir’s software in practice under the name “VeRA”** (short for “Cross-procedural Research and Analysis Platform”). The police in Hesse use it under the name “HessenData”. The American company’s software appears to have overshadowed all other previously known data analysis platforms.

Following media reports about the use of Palantir’s software, critical voices emerged, warning of a threat to freedom and democracy due to concerns that the company could gain unlimited access to various data. Palantir immediately rejected these accusations. The transfer or leakage of data – for example to the United States – was technically impossible, a company spokesperson told DPA. In Bavaria, North Rhine-Westphalia and Hesse, where police are using Palantir programmes, the software is operated “exclusively” on police servers.

This article presents the analysis by Simon Dithelm Maier***, published in Legal Tribune Online on 14 August 2025 under the title “Catching Criminals with Artificial Intelligence: Palantir as the ultima ratio?”****. The analysis highlights that while the use of US-developed artificial intelligence (AI) in investigations is likely to be effective, it simultaneously poses a threat to fundamental rights and endangers Germany’s digital sovereignty.

AI can track down potential criminals within seconds by searching and linking vast quantities of data from social media content, images, cell tower scans, reports, and case data. Providers such as the US company Palantir Technologies offer this promising approach to security agencies. As an investigative tool, AI is both highly effective and high-risk: the more effective it is, the greater the danger of misuse.

The use of the AI software “Gotham” by the American company Palantir is currently under discussion at both federal and state level in Germany. It is considered the world’s leading AI tool for automated data analysis. It was developed by Palantir in close cooperation with the CIA and other US agencies. Palantir’s main shareholder and co-founder, American billionaire Peter Thiel, is a supporter of US President Donald Trump.

Palantir is already in operation


The federal states of Hesse, Bavaria and North Rhine-Westphalia are already using Palantir’s software; implementation is planned in Baden-Württemberg. Other states, such as Berlin, Brandenburg, Hamburg and Schleswig-Holstein, have so far deliberately refrained from using it. While critics highlight the risks to the rule of law, supporters emphasise the system’s effectiveness.

The new German government also intends to enable security services to use artificial intelligence. According to the coalition agreement, “constitutional requirements” and “sovereignty” must be taken into account. The coalition has not yet made a final decision on which AI system the federal agencies should use. Federal Minister of the Interior Alexander Dobrindt is currently exploring the nationwide use of Palantir’s software, while Federal Minister of Justice Stefanie Hubig has expressed scepticism.

The informational self-determination of innocent people is also at risk

In its decision of 16 February 2023, the Federal Constitutional Court of Germany***** for the first time established specific constitutional requirements for the use of artificial intelligence by the police. Accordingly, the use of AI systems for automated data analysis infringes the fundamental right to informational self-determination (Article 2(1) in conjunction with Article 1(1) of the Basic Law) of all those whose data is used in the process. Due to the extensive volume of data collected – which also affects innocent individuals – the creation of personality profiles, the opacity of the software approach (the “black box” effect), and the risks of distortion and discrimination, the impact of AI-supported data analysis exceeds that of conventional data analysis. Its use is therefore subject to specific constitutional requirements. If automated data analysis allows for serious interference with the informational self-determination of those affected, it is only justified in order to protect particularly important legal interests against certain specific threats.

Is the State’s Digital Sovereignty Being Respected?


The discussion surrounding Palantir’s software focuses less on the general dangers of using artificial intelligence and more on the fact that German security authorities are employing software from a private company based in a non-EU country. This raises the question of whether the use of Palantir’s software is compatible with the requirements of digital sovereignty. “Digital sovereignty” refers to a state’s control over the information technology systems it uses; it has, moreover, long been recognised as a key political objective.

Some experts in the fields of artificial intelligence and digital data security, such as Prof. Dr Alexander Roßnagel, Ulrich Kelber, Vyacheslav Bortnikov and others, point out that the requirement for digital sovereignty has its constitutional foundation in the principle of the rule of law under Article 20(3) of the Basic Law of Germany and in the fundamental right to informational self-determination under Article 2(1) in conjunction with Article 1(1) of the Basic Law. The principle of the rule of law obliges the state to provide plausible justification for any interference with fundamental rights. If the state implements an AI system developed by a third party, it makes controlling the AI system more difficult. As a result, the state cannot rule out manipulation or misuse by unauthorised parties. The Federal Constitutional Court states: “If software from private entities or other states is used, there is [...] a risk of undetected manipulation or unnoticed access to data by third parties.”

From the existing case law of the Federal Constitutional Court, it is not yet clear whether stricter constitutional requirements apply to the use of AI systems originating from non-EU countries. However, due to the increased risks to fundamental rights, there is a compelling argument that the use of AI systems developed by third parties should be considered an exception requiring justification. The more relevant a system is to fundamental rights, the more difficult it becomes to justify its use. For AI systems that affect fundamental rights, priority should be given to domestic developments: in-house developments and components should be preferred over products from private foreign providers, and European AI systems take precedence over those from non-EU countries, where lower data protection standards may apply. This holds true, at least, where domestic solutions are equally suitable. In practice, such alternatives are often lacking. However, if an appropriate European software solution becomes available in the future, it would be preferable to Palantir’s software. The principle of digital sovereignty therefore also contains a constitutional mandate for German security authorities to promote domestic development in order to create alternatives to Palantir’s software that respect fundamental rights.

Effectiveness Alone Is Not an Argument

The mere fact that Palantir’s software is more effective than comparable software currently available in Europe cannot, in itself, justify its use. Rather, the deployment of Palantir’s software must also meet the strict test of proportionality. The case law of the Federal Constitutional Court of Germany recognises that the use of AI-powered software for automated data analysis is subject to stringent proportionality requirements. The use of such software is disproportionate if the severity of the interference with the fundamental right to informational self-determination outweighs the potential benefits to security authorities. Since the use of AI software from non-EU countries entails increased risks to fundamental rights, it must be subject to particularly strict proportionality requirements.

Due to its extensive scope, the use of Palantir’s software for automated data analysis has significant implications for fundamental rights. Although there is currently no concrete evidence of manipulation or misuse by third parties, the functionality of Palantir’s software is only partially transparent to external observers. The less transparent a piece of software is, the harder it becomes to justify why a particular individual is the subject of a police measure. At the same time, error-prone or discriminatory algorithms are difficult to detect. Since data protection is given less emphasis in the US than in the EU, there are serious doubts as to whether European data protection standards were adhered to during the development of Palantir’s software.

In this context, the use of Palantir’s software can only be constitutionally justified if, and to the extent that, it is absolutely necessary due to overriding security interests of the federal government or a particular federal state. Whether such an urgent security need actually exists is highly questionable. The majority of federal states have so far refrained from using Palantir’s software without this having seriously compromised their security situations. As a general rule, Germany is considered a safe country. Objectively speaking, there are no continuously escalating security threats within the country that would necessitate the use of Palantir’s software.

Security at Any Cost?


Even if there were an urgent need for security, the use of Palantir’s software would only be permissible as a transitional measure until comparably effective European systems are introduced. Furthermore, German security authorities must test Palantir’s software prior to its deployment in order to identify algorithms prone to error or discrimination. Finally, security authorities must closely monitor and regularly evaluate the use of Palantir’s software. In practice, it is crucial to ensure that only trained personnel operate the software. The results generated by Palantir must not be accepted as accurate without question but should be critically assessed: Artificial intelligence is merely an auxiliary tool in investigative work; security authorities themselves must make the decisions and take responsibility for them. This general principle applies even more strongly when relying on AI systems from non-EU countries.

The debate surrounding the use of Palantir’s software illustrates the broader dilemma of constitutional law in the field of security: The Basic Law empowers and obliges the state to guarantee the safety of its citizens. However, the Basic Law does not require security at any price. The state must accept compromises on security where they are necessary in the name of freedom.

This maxim is especially relevant in the age of artificial intelligence: Security authorities may use AI in their investigative work, but they remain subject to fundamental rights and the rule of law. Whether an urgent need for security truly justifies the use of Palantir’s software is highly questionable. Therefore, it would be wise for the federal and state governments to prioritise and promote the development of domestic software solutions.

 

Translation and editing from German by Dr Sibila Ignatova, Attorney-at-Law

* Palantir was founded in 2003 by Peter Thiel, a German-born American investor and one of the co-founders of PayPal. In 2004, Alexander Karp joined as CEO. The company is now publicly listed and valued at approximately $300 billion.

** VeRA (Verfahrensübergreifende Recherche- und Analyseplattform) – Cross-procedural Research and Analysis Platform.

*** The author is a legal trainee at the law firm Brock Müller Ziegenbein Rechtsanwälte Partnerschaft in Kiel. On behalf of the parliamentary group “Alliance 90/The Greens” in the state parliament of Hesse, the firm filed a complaint this year before the State Constitutional Court of Hesse against the new key provisions of the Hesse Security and Public Order Act, which also concern the use of Palantir’s software by security authorities in the state of Hesse.

**** The article (in German) is available at: Mit Künstlicher Intelligenz Straftäter finden, Legal Tribune Online, 14.08.2025.

***** See the text of the decision of the Federal Constitutional Court of 16 February 2023.
