AI gives answers.
We give reasons.

AI solutions that are transparent by design.

Funded by

Business Finland
Tampere University

Pilot partners include

Netum

Our Solution

Modern AI systems are often large, highly complex black boxes that offer little to no transparency into how their predictions are made. This makes them unsuitable for highly regulated and safety-critical use cases.


To solve this problem, we developed a novel method in the Academy of Finland project XAILOG for learning inherently interpretable AI models directly from data. Our solution provides a cost-efficient and transparent alternative to black-box techniques such as Random Forests and XGBoost, while maintaining competitive predictive performance.

Examples

IF   Uniformity of cell size ≥ 3.5
OR   Bare nuclei ≥ 3.5 AND 1.5 ≤ Uniformity of cell size < 3.5
OR   Single epithelial cell size ≥ 2.5 AND Uniformity of cell shape ≥ 3.5

THEN Malignant
ELSE Benign

Insights From the Data

Trained on the Wisconsin Breast Cancer dataset, this 3-rule classifier flags a tumor as malignant when the cell-size uniformity score is high, when the bare-nuclei score is elevated alongside a moderate cell-size uniformity score, or when epithelial cells are enlarged and the cell-shape score is high. Each rule captures a distinct hallmark of malignancy.

Accuracy: Our solution 96.4% — matching Random Forest (97%) and XGBoost (97%) while remaining fully interpretable.
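Because the model is just a rule list, it translates directly into ordinary code. A minimal sketch of the 3-rule classifier above in plain Python (function and argument names are ours; the thresholds are copied verbatim from the rules):

```python
def classify_tumor(cell_size_uniformity, bare_nuclei,
                   epithelial_cell_size, cell_shape_uniformity):
    """Return 'Malignant' if any of the three learned rules fires,
    otherwise 'Benign'."""
    if cell_size_uniformity >= 3.5:
        return "Malignant"
    if bare_nuclei >= 3.5 and 1.5 <= cell_size_uniformity < 3.5:
        return "Malignant"
    if epithelial_cell_size >= 2.5 and cell_shape_uniformity >= 3.5:
        return "Malignant"
    return "Benign"

# A sample with a cell-size uniformity score of 7 triggers the first rule
print(classify_tumor(7, 1, 2, 2))  # -> Malignant
print(classify_tumor(1, 1, 2, 2))  # -> Benign
```

No framework, no model file: the entire classifier is a dozen lines that anyone can read and audit.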

IF   Duration ≥ 15.5 months AND No established credit history AND Unemployed
OR   Duration ≥ 15.5 months AND Has checking account
OR   Has checking account AND No established credit history AND No guarantor

THEN Bad Credit
ELSE Good Credit

Transparent Decisions

Trained on the German Credit dataset, this 3-rule classifier flags bad credit when long loan durations combine with no established credit history and unemployment, when longer loans are paired with an active checking account, or when a checking account holder has no established credit history and no guarantor backing the loan.

F1 score: Our solution 60.7% — outperforming Random Forest (59.4%) and XGBoost (58.9%) by catching more bad credit cases.
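Rule lists can also be kept as data and evaluated generically, which makes them easy to version, diff, and audit. A sketch of the credit rules above in that style (the feature names and boolean encodings are illustrative; the German Credit dataset encodes these attributes differently):

```python
# Each rule is a list of (feature, test) conditions; the classifier
# returns the positive label if every condition in any rule holds.
RULES = [
    [("duration_months", lambda v: v >= 15.5),
     ("has_credit_history", lambda v: not v),
     ("employed", lambda v: not v)],
    [("duration_months", lambda v: v >= 15.5),
     ("has_checking_account", lambda v: v)],
    [("has_checking_account", lambda v: v),
     ("has_credit_history", lambda v: not v),
     ("has_guarantor", lambda v: not v)],
]

def classify_credit(applicant: dict) -> str:
    for rule in RULES:
        if all(test(applicant[feature]) for feature, test in rule):
            return "Bad Credit"
    return "Good Credit"

applicant = {"duration_months": 24, "has_credit_history": True,
             "employed": True, "has_checking_account": True,
             "has_guarantor": False}
print(classify_credit(applicant))  # rule 2 fires -> Bad Credit
```

When a loan is declined, the rule that fired is the explanation, ready to be shown to the applicant or a regulator.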

Black Box AI vs. Our Approach

Black Box AI

Neural network diagram representing black box AI
  • Predictions hard to explain or audit
  • Complex and large
  • Requires heavy computing infrastructure
  • Compliance risk under EU AI Act / GDPR

Our Solution

IF   Income < 40k  AND Employment < 2y
OR   Debt-to-income ≥ 45%
THEN Deny Loan
ELSE Approve Loan
  • Every prediction has an explanation
  • Short and human-readable
  • Cheap to deploy and maintain
  • Regulatory-compliant by design

Already Have a Model?

You don't have to replace your existing AI system. Feed your model's predictions into ours and get human-readable rules that explain its decision-making. Meet EU AI Act and GDPR explainability requirements without getting rid of your existing model. Uncover what your model actually learned, catch hidden biases, and build trust before deploying.
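The workflow is the classic surrogate-model (distillation) pattern: relabel your data with the black box's own predictions, then learn a small interpretable model on those labels. A sketch using scikit-learn, with a shallow decision tree standing in for our rule learner (the actual product API differs):

```python
# Surrogate-model sketch: explain a black box by mimicking it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# 1. Your existing black-box model stays untouched.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Labels now come from the model, not the original data.
teacher_labels = black_box.predict(X)

# 3. Fit a small interpretable model that mimics the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, teacher_labels)

# 4. Human-readable rules approximating the black box's decision logic.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fraction of inputs on which the surrogate agrees with the black box (its fidelity) tells you how faithful the extracted rules are.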

Benefits

Rivals Black-Box Accuracy

Approaches state-of-the-art performance on structured data.

Compact Results

Far smaller and more readable than decision trees.

Mathematical Guarantees

Backed by rigorous proofs on convergence and accuracy.

Runs on a Laptop

No GPU required. Fast training, instant inference.

What You Can Do With It

Predictions Stakeholders Trust

Loan approvals, insurance claims, medical triage — when an AI makes a decision that affects someone's life, regulations demand an explanation.

EU AI Act & GDPR compliant by design

Discoveries Anyone Understands

Point our solution at your data and it gives you patterns your domain experts immediately understand. No data science degree required.

Turn data into actionable knowledge

Runs Anywhere

Our models are plain if/then rules. No cloud, no GPU, no ML framework required. Deploy them on edge devices, embedded systems, or microcontrollers.

Predictions in any environment

Our Team

A world-class team from Tampere University, advancing transparent and understandable AI through our research in AI and logic.

Antti Kuusisto

Professor of Mathematics
  • Led the Academy of Finland–funded XAILOG project
  • 20+ years of research in logic and AI
  • Publications in NeurIPS, AAAI, and JAIR
Tomi Janhunen

Professor of Computer Science
  • 25+ years of research in logic and AI
  • Chairman of the Finnish AI Society (2022–2023)
  • Multiple Best Paper awards
  • Publications in IJCAI, AAAI, JAIR, NeurIPS
Jussi Lemiläinen

Business Lead
  • 25+ years as an executive and entrepreneur
  • Product development & productization
  • Financing and international sales
Reijo Jaakkola

Doctoral Researcher, Mathematics
  • Publications in logic and explainable AI
  • Ernst Lindelöf Award for Master's thesis
  • Two-time AI hackathon winner
Veeti Ahvonen

Doctoral Researcher, Mathematics
  • Logic and modern AI models (neural networks, transformers)
  • Work featured at NeurIPS and AAAI
  • 7+ years of industry collaboration

Become a Pilot Partner

We are looking for organizations that want to test our solution.

Free

No fees, no commitment. We are funded by Business Finland.

Tailored

We work directly with your team to adapt our solution to your domain, data, and requirements.

Easy to Integrate

Delivered as a library, plugin, or API. Fits into any existing platform or workflow. No infrastructure changes needed.

50+
Years of Research
Combined experience in logic and AI
150+
Publications
Combined publications of the team
100%
Explainable
Every prediction has an explanation
0
GPUs Required
Trains on a laptop and runs on any hardware