Models, orchestration, evals, prompts. Our opinion on what's actually worth your time in the 2026 AI stack.
Prompts as programs you compile.
— Compelling research direction; not quite production-ready as of 2026.
Useful primitives, controversial abstractions.
— We took inspiration, not dependency. Worth reading even if you don't adopt it.
Focused on RAG and document ingestion.
— Better defaults than LangChain if your problem is really retrieval.
UK AISI eval framework.
— What we'd use for before-you-ship LLM tests on any serious system.
OpenTelemetry semantic conventions for LLM traces.
— The emerging standard. We wired the VORLUX orchestrator against this spec.
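As a concrete sketch of what wiring against the spec means: each model call becomes a span carrying the convention's `gen_ai.*` attributes. The attribute names below come from the (still-incubating) OpenTelemetry GenAI semantic conventions; the helper function itself is ours for illustration, not part of any SDK.

```python
# Minimal sketch: the attribute map we attach to each LLM-call span,
# using attribute names from the OpenTelemetry GenAI semantic conventions.
# The helper is illustrative, not an SDK API.

def llm_span_attributes(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    """Build the attribute map for one LLM-call span."""
    return {
        "gen_ai.system": "openai",                       # provider identifier
        "gen_ai.request.model": model,                   # model requested
        "gen_ai.usage.input_tokens": prompt_tokens,      # prompt-side token count
        "gen_ai.usage.output_tokens": completion_tokens, # completion-side token count
    }

attrs = llm_span_attributes("gpt-4o-mini", 120, 48)
```

Because these are plain string keys, any OpenTelemetry-compatible backend can index and query them the same way, which is the whole point of standardizing.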
Config-driven eval harness.
— Pairs well with CI. YAML-first approach that scales past one person.
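To show what "YAML-first" buys you, here is the general shape such a suite takes: prompts, providers, and assertions all live in one reviewable file. Field names here are illustrative, not a guarantee of this tool's exact schema.

```yaml
# Illustrative layout of a config-driven eval suite; field names are
# ours, not necessarily this tool's actual schema.
prompts:
  - "Summarize the following ticket: {{ticket}}"
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      ticket: "Login fails with a 500 after password reset."
    assert:
      - type: contains
        value: "password"
```

Since the whole suite is declarative, a CI job can diff it, run it, and fail the build on a regression, with no test code for anyone to maintain.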
GUI alternative to Ollama if you prefer a window to a terminal.
— Identical capability, friendlier onboarding. Good for client demos.
Local LLMs, one command.
— The backbone of VORLUX's local-first stance. Everyone should have this installed.
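For anyone scripting against it rather than using the CLI, local models are served over an HTTP API on port 11434 by default. The sketch below builds a request to the `/api/generate` endpoint without sending it, so it runs even with no server up; swap in `urllib.request.urlopen(req)` to actually execute. The model name is an example of one you might have pulled.

```python
# Sketch: preparing a call to a locally served model over the HTTP API
# (default port 11434). We only build the request here, so the example
# works offline; pass it to urllib.request.urlopen to run for real.
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    body = json.dumps({
        "model": model,    # any model you've pulled locally, e.g. "llama3"
        "prompt": prompt,
        "stream": False,   # one JSON response instead of a token stream
    }).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Why is the sky blue?")
```

Keeping everything on localhost is exactly the local-first stance: no keys, no egress, and the same request shape from a laptop or an air-gapped box.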
High-throughput inference server.
— When you outgrow Ollama and need real QPS — usually past 10 concurrent users.