Overview
KRNL's Trustworthy AI initiative shifts AI from opaque black-box processes to cryptographically verifiable, regulator-ready systems. As a Trustworthy AI Systems Engineer, you will design and extend the core infrastructure by building open-source executor components, developing network attestation mechanisms and embedding zero-trust guardrails into AI workflows so that every model call, tool invocation and data transfer is cryptographically verifiable, immutably auditable and compliant by design.
This role bridges Web2 services, blockchain protocols and AI agents, giving developers full control over security policies, the flexibility to integrate custom logic and the ability to adapt to evolving regulatory requirements.
Key Responsibilities
- Build and maintain open-source executor components that handle Web2 API calls, blockchain interactions, data processing and multi-protocol workflows, embedding governance logic such as provenance verification, bias/drift checks and role-based constraints.
- Develop modular orchestration frameworks that provide secure AI governance and zero-trust interoperability across single and multi-agent systems.
- Extend KRNL's customizable guardrails executor by creating custom rules, policies and detection logic for threat models and compliance requirements, including dynamic tool verification, automated threat detection, blocked response handling and real-time defense (see the rule-evaluation sketch after this list).
- Develop network-interceptor middleware that intercepts and cryptographically signs all external interactions (requests, responses, DNS resolutions and network summaries) to make API calls and tool invocations transparent, auditable and verifiable (see the signing-middleware sketch after this list).
- Collaborate on tethering mechanisms that record attested information on-chain, ensuring immutable lifecycle governance with rollback capabilities.
- Work with Attestor components to sign execution results, monitor execution traffic and manage cryptographic keys. Implement sandbox environments (e.g., gVisor) and enforce resource constraints to ensure secure execution and prevent command injection.
- Support verifiable multi-agent orchestration by ensuring each agent's actions are executed in sandboxed environments, monitored, and logged immutably.
- Provide developer tools and frameworks that allow dApp builders to configure executors, network policies and execution contexts, supporting custom logic while meeting regulatory requirements.
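For illustration only, the sketch below shows one way a custom guardrail rule might be expressed. Everything here is hypothetical: the `Rule` interface, the `ToolCall` record and the sample rules are assumptions about the general shape of policy-driven tool verification and blocked-response handling, not KRNL's actual API.

```go
package main

import (
	"fmt"
	"regexp"
)

// ToolCall is a hypothetical record of a tool invocation requested by an agent.
type ToolCall struct {
	Tool string
	Args string
}

// Rule is a minimal guardrail interface: each rule inspects a call and
// either allows it or blocks it with a reason.
type Rule interface {
	Evaluate(call ToolCall) (allowed bool, reason string)
}

// allowlistRule blocks any tool that is not explicitly permitted.
type allowlistRule struct{ allowed map[string]bool }

func (r allowlistRule) Evaluate(c ToolCall) (bool, string) {
	if !r.allowed[c.Tool] {
		return false, fmt.Sprintf("tool %q is not on the allowlist", c.Tool)
	}
	return true, ""
}

// patternRule blocks arguments matching a threat pattern (e.g. shell metacharacters).
type patternRule struct{ pattern *regexp.Regexp }

func (r patternRule) Evaluate(c ToolCall) (bool, string) {
	if r.pattern.MatchString(c.Args) {
		return false, fmt.Sprintf("arguments match blocked pattern %s", r.pattern)
	}
	return true, ""
}

// evaluateAll applies every rule in order; the first failure blocks the call.
func evaluateAll(rules []Rule, c ToolCall) (bool, string) {
	for _, rule := range rules {
		if ok, reason := rule.Evaluate(c); !ok {
			return false, reason
		}
	}
	return true, ""
}

func main() {
	rules := []Rule{
		allowlistRule{allowed: map[string]bool{"http_get": true, "search": true}},
		patternRule{pattern: regexp.MustCompile(`[;&|]`)},
	}
	for _, c := range []ToolCall{
		{Tool: "http_get", Args: "https://example.com"},
		{Tool: "shell", Args: "rm -rf /"},
		{Tool: "search", Args: "foo; cat /etc/passwd"},
	} {
		if ok, reason := evaluateAll(rules, c); ok {
			fmt.Printf("ALLOW %s(%s)\n", c.Tool, c.Args)
		} else {
			fmt.Printf("BLOCK %s(%s): %s\n", c.Tool, c.Args, reason)
		}
	}
}
```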
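The signing middleware can be sketched the same way, here as a wrapper around Go's standard `http.RoundTripper`. The `signingRoundTripper` type is an assumption for illustration; a production interceptor would persist the digests and signatures to an immutable audit log rather than print them, and would also cover DNS resolutions and network summaries.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"net/http/httputil"
)

// signingRoundTripper wraps an http.RoundTripper, hashing each outbound
// request and inbound response and signing the digests so the interaction
// can be verified and audited after the fact.
type signingRoundTripper struct {
	next http.RoundTripper
	key  ed25519.PrivateKey
}

func (s *signingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Serialize and sign the outbound request before it leaves the process.
	reqBytes, err := httputil.DumpRequestOut(req, true)
	if err != nil {
		return nil, err
	}
	reqDigest := sha256.Sum256(reqBytes)
	reqSig := ed25519.Sign(s.key, reqDigest[:])

	resp, err := s.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}

	// Serialize and sign the response so both halves of the exchange are attested.
	respBytes, err := httputil.DumpResponse(resp, true)
	if err != nil {
		return nil, err
	}
	respDigest := sha256.Sum256(respBytes)
	respSig := ed25519.Sign(s.key, respDigest[:])

	// Stand-in for an immutable audit log; signatures are truncated for display.
	fmt.Printf("request  sha256=%s sig=%s...\n",
		hex.EncodeToString(reqDigest[:]), hex.EncodeToString(reqSig[:8]))
	fmt.Printf("response sha256=%s sig=%s...\n",
		hex.EncodeToString(respDigest[:]), hex.EncodeToString(respSig[:8]))
	return resp, nil
}

func main() {
	_, priv, err := ed25519.GenerateKey(nil) // nil reader defaults to crypto/rand
	if err != nil {
		panic(err)
	}
	client := &http.Client{
		Transport: &signingRoundTripper{next: http.DefaultTransport, key: priv},
	}
	if _, err := client.Get("https://example.com"); err != nil {
		panic(err)
	}
}
```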
Required Qualifications
- 5+ years developing secure distributed systems or blockchain protocols in languages such as Go, Rust or Python.
- Deep understanding of cryptographic primitives (signatures, hashing, key management) and network-security best practices, with familiarity with zero-trust architectures.
- 2+ years building and maintaining high-availability APIs or orchestration frameworks.
- Experience with agentic frameworks (LangChain, AutoGen, CrewAI) and integrating AI systems.
- Solid understanding of AI lifecycle management, including MLOps and LLMOps practices.
- Experience implementing secure sandboxing and containerization technologies, including resource isolation and monitoring.
- Ability to translate regulatory and ethical requirements into technical controls (e.g., provenance tracking, bias detection and auditability).
Preferred Qualifications
- Experience with smart-contract development, cross-chain communication or decentralized application design.
- Familiarity with guardrail frameworks (e.g., LlamaFirewall) and prompt-injection mitigation techniques.
- Prior contributions to open-source projects and comfort working in a community-audited ecosystem.