Your AI.
Trustworthy, certification-ready
and future-proof.

Building certification-ready AI as a safe and reliable business companion.


We have been awarded the TRUSTIFAI certificate by TÜV AUSTRIA for a medical imaging application.

We are certified to the ISO 9001 standard for quality management systems.

Why is trust essential to AI business?

Ultimately, AI systems are decision-making systems, and trust in those decisions is key to realizing the potential of AI. Even with a human in the loop, trust in the AI's suggestions drives adoption and return on investment. This is especially true if your AI is used in a mission-critical area, such as finance, logistics, or R&D, or in a safety-critical domain, such as medical devices or driver assistance.

“As digital technology becomes an ever more central part of every aspect of people’s lives, people should be able to trust it. […] This is a chance for Europe, given […] its proven capacity to build safe, reliable and sophisticated products and services from aeronautics to energy, automotive and medical equipment.” [EU Commission White Paper]

The upcoming EU AI Act will regulate the use of AI in a wide range of high-risk areas, including critical infrastructure, employment, essential financial services, education, and others. AI won't be banned in these areas, but it will be scrutinized to ensure that it is safe, robust, and fair. If you operate in a high-risk area, you can prepare your AI for the coming regulation by investing in Trustworthy AI today.


How do we work?

How can we build trusted solutions with a young technology such as AI?

We think that there are three key ingredients:

  1. Transparency
    A system’s decision making needs to be transparent with respect to performance metrics and the data used to train the system.
  2. Systematic Development
    A trustworthy system needs to be developed with a thorough and systematic process based on established best practices.
  3. Certification
    The ultimate seal of trust is a certification of the system by a reputable independent certification body.

Know your AI

In a joint workshop, we assess your AI use-cases, challenges and opportunities. Through interviews and analyses, we evaluate where you stand with respect to trustworthy AI, using an established catalogue of criteria based on upcoming industry standards. We then share a detailed gap analysis with you and provide actionable recommendations on how to close gaps, if there are any.

Go for certification

If you decide to get the ultimate seal of trust for your AI, we are there to guide you through this complex process. We have developed AI applications, both in-house and with customers, that were certified by an independent certification body. Based on this experience, we can steer and coach you to certification smoothly and efficiently.

Get support

We are experts in data science, machine learning, and AI, as well as in ML engineering, data engineering, and operations, with dozens of projects under our belts. We can support you if you are up against a deadline or need help closing the gaps.

Organization

Our multi-disciplinary teams are made up of product owners, data architects, business & legal consultants, as well as data scientists, analysts, engineers and data-ops specialists. We are spread across six data studios in Berlin, Hamburg, Stuttgart, Munich, Lisbon and Bucharest.

For a better understanding of our process, meet our AI Principal David in the video below.


AI Act: A risk-based approach.

The European Union has introduced the AI Act, a piece of legislation focused on artificial intelligence. The act is expected to come into force no later than early 2025. It is designed to ensure the safety of AI, especially in high-risk areas of application, but also the safety of consumer products and services based on AI. It also aims at preventing discrimination through biased automated decisions, as well as the dissemination of flawed information.
The AI Act employs a tiered approach to regulation, imposing especially stringent standards on high-risk AI systems. It categorizes AI systems into five groups: those with unacceptable risks, which will be prohibited; high-risk systems, which will be subject to extensive regulation; systems with a potentially high risk, such as general-purpose AI systems (GPAI), which will be subject to fundamental safety and transparency requirements; limited-risk systems, which will be monitored but not regulated; and minimal-risk AI systems, which do not require any regulatory oversight.

Does my AI system qualify as high-risk?

1 EU Product Safety Legislation

An AI system used in products falling under the EU's product safety legislation, including toys, aviation, cars, medical devices or lifts, will be considered high-risk.

2 EU Regulated Domains

Any AI system that falls under one of the areas below will have to be registered in an EU database:

// Biometric identification
// Critical infrastructure
// Access to education
// (Access to) employment
// Access to essential public/private services
// Law enforcement
// Migration, asylum and border control
// Assistance in legal interpretation

3 Generative AI

GenAI must comply with transparency requirements: it must disclose that content was generated by AI and where to find summaries of copyrighted data used for training. Furthermore, the model must be designed to prevent it from generating illegal content.

Organizations that have integrated high-risk AI as part of their service are subject to extensive requirements under Article 29 of the AI Act to ensure the safe utilization of these systems.

These requirements include:

  • the implementation of sufficient technical and organizational measures,
  • regular monitoring and adjustment of cybersecurity measures,
  • monitoring of the operation of the AI system,
  • a data protection impact assessment according to Art. 35 GDPR, where applicable, including publication of a summary,
  • a risk impact assessment before the initial deployment of the AI system, and more.

We do data, and talk about it.

Our applydata blog is a platform where our experts publish their take on the latest in data, AI, regulation, business and service design.

We also speak at conferences, and we have recently hosted our own.

applydata blog >


We are a member of the KI Bundesverband (German AI Association), an association of 400+ AI companies dedicated to ethical AI.

We are a member of Bitkom, Germany’s digital association representing more than 2,200 companies of the digital economy.

Let’s get started.

Let’s meet to discuss your AI use-case in light of the regulatory requirements of the AI Act and trustworthy AI.

Trust is a competitive advantage, even in unregulated areas of application.

Let’s see where you stand and how you can gain 100% client and stakeholder trust.

diconium:
Creating Digital Champions.

Swantje Kowarsch
Managing Director, diconium data

David Blumenthal-Barby
Principal Specialist AI Engineering

Nadja Müller
Legal Engineer

Neal Sinclair
Data Scientist

Kerstin Reckrühm
Data Scientist

Arash Azhand
Senior Specialist AI Engineering
