RESEARCH & PUBLICATIONS

Papers, Articles & Open Research

Everything we publish is open. Every experiment is reproducible. Every result is verifiable.

arXiv · Oct 2025 · 35 pages, 15 figures · PINNED

Beyond Prompt Engineering: Neuro-Symbolic-Causal Architecture for Robust Multi-Objective AI Agents

Gokturk Aytug Akarlar

We present Chimera, a neuro-symbolic-causal architecture that integrates an LLM strategist, a formally verified symbolic constraint engine, and a causal inference module. Chimera consistently delivers the highest returns while improving brand trust, demonstrating prompt-agnostic robustness.

arXiv · Oct 2025 · 17 pages, 4 figures

A Multi-Evidence Framework Rescues Low-Power Prognostic Signals and Rejects Statistical Artifacts in Cancer Genomics

Gokturk Aytug Akarlar

A five-criteria computational framework integrating causal inference with orthogonal biological validation. Applied to TCGA-BRCA mortality analysis, the framework correctly distinguishes true prognostic signals from false positives through mutation pattern analysis.

Article · Feb 2026 · PINNED

Your AI Doesn't Need Better Prompts. It Needs Laws.

Aytug Akarlar

Prompts are probabilistic suggestions. Production needs deterministic guarantees. Introducing CSL, a Solidity-like safety layer for AI.

Article · Mar 2026

AI Agents Need Governance, Not Prompts — I Built AI Governance Layer.

Aytug Akarlar

AI agents are everywhere. They approve expenses, write code, send emails, make hiring decisions. But here's the uncomfortable truth about how they're governed.

Article · Feb 2026

I Benchmarked 4 Frontier LLMs as Security Guardrails. None of Them Passed.

Aytug Akarlar

A systematic evaluation of GPT-4.1, GPT-4o, Claude Sonnet 4, and Gemini 2.0 Flash against 22 adversarial attack scenarios, and what it means for AI safety.

Article · Feb 2026

Chimera: Deterministic Control Layer for Safe, Legal AI Deployment

Aytug Akarlar

Every guardrail you've seen for LLMs follows the same pattern: natural-language instructions stuffed into a system prompt, in the hope that the model cooperates. We built something different.

Article · Jan 2026

The Missing Link — Why Capital Isn't Flowing from the Real Sector to AI

Aytug Akarlar

The hype cycle is over. The audit has begun. Why the Real Sector rejects 'black box' probability and what AI governance can do about it.

Article · Oct 2025 · 140 claps

Beyond Prompt Engineering: Neuro-Symbolic-Causal Architecture for Robust Multi-Objective AI Agents

Aytug Akarlar

Why architecture beats prompt engineering for safe, reliable autonomous AI. Published in Data Science Collective.

Article · Sep 2025 · 40 claps

Project Chimera: A Neural-Symbolic-Causal Hybrid AI Agent / One Step Closer to the Holy Grail

Aytug Akarlar

The Holy Grail of AI — Beyond LLMs, Towards True Hybrid Intelligence. Published in Data Science Collective.

Article · Sep 2025

Project Chimera: A Mathematical Proof Against AI Hallucinations

Aytug Akarlar

We don't just hope our AI is safe. We used a method from critical systems engineering to mathematically prove it. Here's how, and why.

Article · Aug 2025 · 20 claps

Causality in AI: Was Our Profitable Agent Secretly Losing Us Money?

Aytug Akarlar

A deep dive into causal inference for AI agents. How counterfactual reasoning revealed hidden losses in a seemingly profitable trading strategy.

Article · Aug 2025 · 51 claps

Causality in Artificial Intelligence: Dynamic Pricing with Causal Inference & Reinforcement Learning

Aytug Akarlar

Dynamic pricing with causal inference and reinforcement learning. Published in Data Science Collective.


Interested in contributing to our research?