OPEN RESEARCH

Open Problems in AI Governance

We publish the hardest unsolved questions in AI governance, causal AI, and policy-constrained systems. Solve one and become a Chimera Fellow.

#01HardOpen

Can formal policy constraints prevent prompt injection attacks?

Investigate whether CSL-Core constraints can define input boundaries that make prompt injection structurally impossible. Explore how constraint verification at the policy layer differs from output filtering approaches.

RELATED:InputBoundary.cslAgentSafety.csl
securityprompt-injectionformal-verificationApply →
#02HardOpen

How can AI agents produce verifiable compliance proofs?

Design a mechanism where AI agents generate cryptographic or logical proofs that their actions comply with a given CSL policy set. The proof should be independently verifiable without re-running the agent.

RELATED:ComplianceProof.cslAuditTrace.csl
complianceproofsauditApply →
#03IntermediateOpen

Can causal traces improve explainability of AI decisions?

Explore whether formal causal traces (as defined in CSL-Core) can serve as human-readable explanations for AI decision-making. Compare causal trace explainability against SHAP, LIME, and attention-based methods.

RELATED:CausalTrace.cslExplainability.csl
explainabilitycausal-aibenchmarksApply →
#04Open-EndedOpen

What is the minimal constraint system for safe autonomous agents?

Define the smallest set of CSL constraints that guarantee safety for a general-purpose autonomous agent. Investigate whether there exists a universal safety kernel that all agent architectures must satisfy.

RELATED:SafetyKernel.cslAgentSafety.csl
safetyautonomous-agentstheoryApply →
#05IntermediateOpen

Benchmark: LLM guardrails vs CSL formal constraints

Create a rigorous benchmark comparing traditional LLM guardrails (Constitutional AI, RLHF, output filters) against CSL-Core formal constraints. Measure violation rates, latency overhead, and coverage across adversarial scenarios.

RELATED:Benchmark.csl
benchmarksguardrailsevaluationApply →
#06HardOpen

Can CSL policies compose across multi-agent systems?

Investigate policy composition: when multiple agents each follow individual CSL policies, does the composed system satisfy a global safety property? Formalize conditions under which composition is safe.

RELATED:MultiAgent.cslPolicyComposition.csl
multi-agentcompositiontheoryApply →

SUGGEST A PROBLEM

Have an open question?

Submit a research question. If accepted, it gets published here with credit to you.

Sent to research@chimera-protocol.com

Think you can solve one? Join the Research Program.

Successful contributors become Chimera Fellows with recognition, portfolio credit, and ongoing research access.