Join Our Research
Help build the future of trustworthy AI through structural constraint research
Why Work With Us
At Constraint Layer AI Research, you'll work on foundational problems that shape the future of human-AI interaction. Our small, focused team is developing structural constraint approaches to AI hallucination and alignment failures.
Cutting-Edge Research
Work on constraint architectures tackling problems that major AI companies haven't solved. Publish in top venues and help shape the field.
Real Impact
Your work directly improves AI safety and reliability. See your research deployed in production systems across multiple industries.
Small Team, Big Problems
Lead major initiatives from day one. No bureaucracy, just focused research on problems that matter.
Current Opportunities
Senior AI Safety Researcher
Full-time • Remote/Bangkok • Research Lead
Lead research on formal verification of constraint-based AI systems. Develop mathematical frameworks for proving behavioral properties and safety guarantees in Structural Fidelity architectures.
Key Responsibilities:
- Design and implement formal verification systems for AI constraint architectures
- Publish research in top-tier AI safety and verification venues
- Collaborate with industry partners on constraint system deployment
- Mentor junior researchers and guide research direction
Requirements:
- PhD in Computer Science, Mathematics, or related field
- Experience with formal methods, theorem proving, or verification systems
- Strong publication record in AI safety, verification, or constraint systems
- Fluency in Python and proof assistants such as Coq, Lean, or Isabelle
Constraint Systems Engineer
Full-time • Remote/Bangkok • Engineering
Build production-ready implementations of our Structural Fidelity Framework. Optimize performance, ensure scalability, and integrate with existing AI systems across multiple model families.
Key Responsibilities:
- Implement constraint-layer architectures for production deployment
- Optimize system performance and reduce latency overhead
- Develop integration APIs for enterprise customers
- Maintain compatibility across different LLM architectures
Requirements:
- MS/BS in Computer Science or equivalent experience
- 5+ years experience with large-scale ML systems
- Expert-level Python; experience with PyTorch or TensorFlow
- Background in distributed systems and API development
Research Scientist - Adversarial Testing
Full-time • Remote/Bangkok • Security Research
Discover new jailbreak techniques and test constraint system robustness. Lead red-team operations against AI systems and develop systematic evaluation frameworks for AI safety.
Key Responsibilities:
- Design novel adversarial attacks against AI safety systems
- Conduct systematic red-team evaluations of constraint architectures
- Develop automated testing frameworks for AI robustness
- Coordinate responsible disclosure with AI companies
Requirements:
- PhD in Computer Science, Cybersecurity, or related field
- Experience with adversarial ML, red-teaming, or security research
- Strong understanding of LLM architectures and training
- Publication record in security conferences or AI safety venues
Benefits & Culture
Remote-First
Work from anywhere with flexible hours. Our team collaborates across time zones with regular virtual meetings and annual in-person gatherings.
Research Freedom
20% time for independent research projects. Conference travel budget, publication support, and encouragement to explore novel directions.
Competitive Package
Market-rate compensation with equity participation. Health benefits, equipment budget, and performance-based bonuses.
Application Process
Apply
Send your CV and cover letter highlighting relevant experience and research interests.
Technical Screen
Discuss your background and work through relevant technical problems with our team.
Research Presentation
Present past work or propose a research direction relevant to our mission.
Team Fit
Meet the team and discuss how you'd contribute to our research culture and goals.
Ready to Shape AI's Future?
Join us in building the structural foundations for trustworthy artificial intelligence.