Lens x Mirantis
Copyright © 2026 Mirantis Inc. – All rights reserved.

Lens Loop IDE


Superpowers for Developers Working with LLMs

Loop gives developers and teams full visibility into every LLM call — from prompts and responses to tool invocations, latency, and cost. Built on OpenTelemetry for effortless observability at scale.

Sign Up for Closed Beta | Contact Sales

Observe. Understand. Improve.

Loop is built for developers, data scientists, ML engineers, and AI product teams who want to go beyond logs and guesswork. It provides complete visibility into everything your AI does — before, during, and after every LLM call.

  • Observe

    Gain full visibility into every step of your AI workflow — from prompts and RAG retrievals to tool calls, MCP executions, and model responses. With powerful filters and instant search, it's easy to trace behavior, identify anomalies, and stay in control of your LLM-powered application.

  • Understand

    Quickly identify where latency, cost, or quality issues occur. Explore full waterfall timelines, dependencies, and cause‑and‑effect relationships — enhanced by AI‑driven insights that highlight what truly matters.

  • Improve

    Compare prompt versions, evaluate outcomes, and optimize using real data — not guesswork. Understand how each change affects performance, cost, and quality to drive continuous improvement with confidence.

Curious how it works in practice?

Sign Up for Closed Beta

See Loop in Action

From first-time users to global platform teams, Loop delivers instant visibility and scales effortlessly with the growing complexity of your LLM workflows — whether you’re working locally, testing, or running in production.

  • Get Started in Minutes

    Go from zero to a fully observable LLM application in just minutes. Instantly capture and inspect every prompt, response, and API interaction — no complex setup required.

  • Debug, Analyze, and Improve — All in One Place

    See how Loop helps you understand the full lifecycle of your LLM workflows — from prompt to tool calls, retries, and responses.

A Developer Toolkit for Production-Grade AI

Loop gives you complete visibility into every step of your AI pipeline — not just the LLM request, but everything that happens before, after, and around it.

From single-agent prototypes to multi-model orchestration at enterprise scale, Loop gives you one pane of glass for the entire AI lifecycle — prompt, context, tool, and response.

Sign Up for Closed Beta
  • Observe

  • Traces View

    Live stream of all LLM interactions, structured into traces and spans. Filter, search, and inspect what’s happening in real time.

  • Trace Preview Panel

    Quickly see inputs, outputs, duration, and metadata of a span without leaving the trace list.

  • Span Labels & Types

    Automatic labeling for key span types like llm, tool-call, http, and mcp for easier classification and filtering.

  • Remote & Local Gateway Support

    Capture traffic from local development or deployed environments using Loop Gateway with full OpenTelemetry support.

  • OpenTelemetry Integration

    Use OTEL SDKs (NodeJS, .NET, Python, etc.) to capture structured spans from your backend, tools, or custom logic.

  • Custom Headers Support

Pass X-Loop-Project, X-Loop-Session, and X-Loop-Custom-Label headers to enrich trace data without extra configuration.
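
As an illustration, these headers could be attached to an outgoing LLM request like this. Only the X-Loop-* header names come from this page; the gateway URL, model name, and header values are hypothetical:

```python
import json
import urllib.request

# Hypothetical local Loop Gateway endpoint; in practice this would be
# wherever your gateway listens (see "Remote & Local Gateway Support").
GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build an LLM API request carrying Loop's trace-enrichment headers."""
    body = json.dumps({
        "model": "gpt-4o",  # example model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(GATEWAY_URL, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    # Header names taken from this page; values are illustrative.
    req.add_header("X-Loop-Project", "checkout-bot")
    req.add_header("X-Loop-Session", "session-42")
    req.add_header("X-Loop-Custom-Label", "prompt-v2")
    return req

req = build_request("Summarize my last order.")
```

Because the headers ride along with the normal API call, no extra instrumentation code is needed on the application side.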

  • Understand

  • Trace Details Panel

    Deep dive into each trace: view tokens, cost, duration, model parameters, tool responses, and user-visible outputs.

  • Trace Preview Panel

    Visual timeline of span execution showing parallelism, dependencies, and latency bottlenecks.

  • Insights Panel

    Aggregated metrics across traces: averages, histograms, outliers, percentiles — instantly visible in the context of filters.

  • Type & Label Columns

    Identify and group trace traffic based on span type (llm, mcp, tool-call, etc.) and custom labels.

  • Insights Bar

    Always-visible summary bar showing metrics like avg duration, p95 latency, row count, and active filters.

  • Telemetry Breakdown

    Understand where costs, retries, or delays come from — token-level and step-by-step.
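
The kind of token-level cost roll-up described above can be approximated directly from raw span data. This is a minimal sketch, not Loop's implementation; the per-1K-token prices and span field names are purely illustrative:

```python
from collections import defaultdict

# Illustrative per-1K-token prices; real pricing varies by provider and model.
PRICE_PER_1K = {"gpt-4o": {"prompt": 0.0025, "completion": 0.01}}

def cost_breakdown(spans):
    """Aggregate estimated cost per pipeline step from per-span token counts."""
    totals = defaultdict(float)
    for span in spans:
        price = PRICE_PER_1K.get(span["model"], {"prompt": 0.0, "completion": 0.0})
        cost = (span["prompt_tokens"] / 1000) * price["prompt"] \
             + (span["completion_tokens"] / 1000) * price["completion"]
        totals[span["step"]] += cost
    return dict(totals)

spans = [
    {"step": "rag-retrieval", "model": "gpt-4o", "prompt_tokens": 400,  "completion_tokens": 50},
    {"step": "final-answer",  "model": "gpt-4o", "prompt_tokens": 1200, "completion_tokens": 300},
]
print(cost_breakdown(spans))  # estimated cost per step
```

Grouping by step rather than by request is what makes it possible to see that, say, retrieval prompts rather than final answers dominate spend.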

  • Improve

  • Remix LLM Calls

    Re-run past traces with new prompts, parameters, or models to test improvements safely and compare outputs.

  • Prompt Gallery

    Save, manage, and reuse effective prompts. Browse built-in templates or create your own for evaluation and scoring.

  • Prism AI Assistant

    Your built-in AI copilot that understands your data. Ask questions about traces, anomalies, or metrics — and get instant answers in context.

  • Version Comparison

    Compare prompt versions or model settings over time — see which changes improved quality or reduced cost.
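
Conceptually, this kind of comparison reduces to grouping trace records by prompt version and contrasting a metric across groups. A minimal sketch with hypothetical trace fields:

```python
from statistics import mean

def compare_versions(traces, field="latency_ms"):
    """Group trace records by prompt version and average one metric per group."""
    by_version = {}
    for t in traces:
        by_version.setdefault(t["prompt_version"], []).append(t[field])
    return {version: round(mean(vals), 2) for version, vals in by_version.items()}

# Hypothetical trace records with two prompt versions.
traces = [
    {"prompt_version": "v1", "latency_ms": 820, "cost_usd": 0.012},
    {"prompt_version": "v1", "latency_ms": 780, "cost_usd": 0.011},
    {"prompt_version": "v2", "latency_ms": 610, "cost_usd": 0.009},
    {"prompt_version": "v2", "latency_ms": 650, "cost_usd": 0.010},
]
print(compare_versions(traces))                   # average latency per version
print(compare_versions(traces, field="cost_usd")) # average cost per version
```

Running the same comparison over real trace data, instead of a handful of manual spot checks, is what turns prompt tuning from guesswork into measurement.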

  • Platform

  • Cross-Platform Compatibility

    Works seamlessly across macOS, Windows, and Linux — so every developer, data scientist, or ML engineer can use Loop effortlessly.

  • Secure by Design

    All data stays in your environment. Loop respects credentials, access controls, and enterprise security policies — no shadow access.

  • Developer-First UX

    Built with the same design philosophy as Lens K8S IDE: powerful, fast, and intuitive. Every action feels natural in your daily workflow.

  • OpenTelemetry Native

    Full OTEL compatibility across products — giving your Kubernetes, backend, and AI pipelines a single, standards-based source of truth.
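
Because the OpenTelemetry specification defines standard exporter settings, any OTEL SDK (Python, NodeJS, .NET, and others) can presumably be pointed at a Loop Gateway without code changes. The variable names below come from the OTEL spec; the endpoint value is only a placeholder:

```python
import os

# Standard OpenTelemetry exporter settings, defined by the OTEL spec
# rather than by Loop. 4318 is the spec's default OTLP/HTTP port; the
# host is a placeholder for wherever a Loop Gateway would listen.
os.environ["OTEL_SERVICE_NAME"] = "checkout-bot"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"  # placeholder
os.environ["OTEL_EXPORTER_OTLP_PROTOCOL"] = "http/protobuf"
```

Since every OTEL SDK reads the same variables, the same configuration can route Kubernetes, backend, and AI-pipeline telemetry to one destination.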

Ready to take a peek inside your LLM app's black box?
Sign Up for Closed Beta | Contact Sales

Trusted by the World’s Best Product Teams

From fast-growing startups to global enterprises, more than 1 million developers from the world’s top teams rely on Lens every day.

Abbot
accenture
Audi
Becton Dickinson
Capgemini
Cisco
Cognizant
Disney
Equifax
General Electric
Lockheed Martin
Microsoft
Nike
nvidia
Pfizer
P&G
Siemens
Verizon Connect