Inside the Lab
The Environment
A controlled testing environment built for adversarial AI security research. Docker is the isolation layer — Ollama, PyRIT, Promptfoo, and all adversarial tooling run inside containers. API deployments are tested through controlled endpoint configurations.
Isolated Inference
Docker is the isolation layer. Ollama runs inside a container for local model inference — Llama-3, Mistral, Phi-3, and others supported. API-based targets are tested through controlled endpoint configurations. Every test environment is isolated, repeatable, and reproducible.
Structured Telemetry
Security is only as good as its logs. The Elastic Stack (ELK) captures every token and transition, turning raw adversarial sessions into structured data mapped to known vulnerability patterns.
Adversarial Stack
PyRIT, Promptfoo, and all adversarial tooling run inside Docker containers — alongside custom attacker strategies built to probe model behavior beyond what standard scans cover.
What Gets Tested Here
The lab environment supports adversarial testing across deployment types.
Local Deployments
Models running on-premise via Ollama, vLLM, or similar runtimes. Full isolation, no external dependencies during inference.
API Deployments
Models accessed through provider APIs (OpenAI, Anthropic, Azure OpenAI). Tested through controlled endpoint configurations with structured adversarial inputs.
Hybrid Architectures
Systems combining local inference with API-based components, RAG pipelines, or agent frameworks. Tested for vulnerabilities at integration boundaries.
From the Lab
Live screenshots from active testing sessions in the Intrenex Lab environment.

Elastic Stack dashboard — adversarial session telemetry indexed by strategy, turn depth, and token count.

Promptfoo scan matrix — structured pass/fail results across test categories.

PyRIT adversarial probe running against Llama-3-8B.

Docker containers running the full Intrenex testing stack.