
2025-11-19


OpenSearch 3.2: What’s New?

OpenSearch 3.2 has been released, bringing a wide range of improvements across search, observability, and generative AI use cases. This release continues to build on the innovations introduced in the 3.x line, with a strong focus on performance, scalability, and flexibility for advanced workloads.
While some updates are incremental, others represent important steps toward making OpenSearch more efficient at scale and better suited for modern AI-powered applications. Below we take a closer look at the most notable changes.

Search Enhancements

Approximation framework: One of the most impactful updates in this release is the optimization of the approximation framework. Until now, search_after queries fell back to Lucene traversal, which often slowed down pagination in large datasets. With the new approach, these queries are converted into proper range bounds, dramatically improving responsiveness for dashboards, time-series workloads, and applications that require deep pagination. The framework has also been extended to support all numeric field types rather than just LONG, making it more versatile for analytics and filtering.
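As a rough illustration of the pattern this optimization targets, here is a minimal sketch of search_after pagination. The index fields, tiebreaker field, and document values are hypothetical; only the overall request shape follows the standard search API.

```python
# Minimal sketch of deep pagination with search_after. Field and document
# names are hypothetical. Each page re-issues the same sorted query,
# passing the previous page's last sort values as "search_after".

def build_page_query(page_size, last_sort_values=None):
    """Build one page of a timestamp-ordered scan over the last 7 days."""
    body = {
        "size": page_size,
        "query": {"range": {"@timestamp": {"gte": "now-7d"}}},
        # A unique tiebreaker field keeps the sort total, so consecutive
        # pages never overlap or skip documents.
        "sort": [{"@timestamp": "asc"}, {"event_id": "asc"}],
    }
    if last_sort_values is not None:
        body["search_after"] = last_sort_values
    return body

first_page = build_page_query(100)
# Sort values taken from the last hit of the previous page:
next_page = build_page_query(100, ["2025-11-19T00:00:00Z", "evt-001042"])
```

Under the new framework, the sort-driven cursor in requests like these can be turned into range bounds instead of falling back to full Lucene traversal, which is where the pagination speedup comes from.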
gRPC and Protobuf reach general availability: A major milestone in this release is the general availability of the gRPC transport layer with Protocol Buffers support. This high-performance alternative to the REST APIs uses a compact binary format that reduces payload size and improves performance, particularly for bulk document ingestion and k-NN query operations. The gRPC implementation offers expanded search API functionality and encryption in transit, opening up new possibilities for performance-critical workloads and efficient client integrations.
Skip list parameter: Range queries and aggregations are often expensive to run, but with the new skip_list option, OpenSearch can now skip over irrelevant document ranges. This allows the system to process queries more efficiently, especially in datasets with wide numeric distributions.
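To make this concrete, here is a hedged sketch of a numeric field using the new option and a range query that can benefit. Where exactly skip_list is declared is an assumption made for illustration, not a confirmed mapping shape.

```python
# Hypothetical sketch: a numeric field with the skip_list option enabled,
# and a range query over it. The placement of "skip_list" in the mapping
# is an assumption for illustration.

index_body = {
    "mappings": {
        "properties": {
            "latency_ms": {
                "type": "long",
                "skip_list": True,  # assumed mapping parameter
            }
        }
    }
}

# A range query that can skip blocks of documents whose values fall
# entirely outside the requested bounds.
slow_requests = {
    "query": {"range": {"latency_ms": {"gte": 500, "lt": 2000}}}
}
```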
Star-tree updates: Star-tree indexing also sees an update in this release. It now supports queries on IP fields, a change that is especially valuable for log analytics and security data. Administrators also benefit from new statistics that provide visibility into how star-tree queries are being executed, including metrics for total queries resolved, currently running queries, and total time spent on star-tree operations.
Streaming aggregation (experimental): Another major step is the introduction of streaming aggregation. Instead of pushing all aggregation results from data nodes at once, results are streamed incrementally back to the coordinator node. This reduces the memory burden on data nodes and shifts the responsibility to the coordinator, which is easier to scale.
Search relevance workbench (GA): Finally, the search relevance workbench is now generally available. This tool allows users to experiment with different ranking strategies, visualize query quality, and diagnose relevance issues in a more structured way. Together, these changes give OpenSearch 3.2 a stronger foundation for both traditional search and analytical scenarios.

Vector Search and Generative AI

Expanded GPU support: Vector search has become a central focus for OpenSearch, and version 3.2 continues this trend by introducing new capabilities that improve performance and scalability. GPU indexing now supports FP16, byte, and binary vector types in addition to FP32. These lower-precision formats reduce memory usage, which allows larger datasets to fit into GPU memory and minimizes costly data transfers between CPU and GPU. The result is more efficient processing and lower infrastructure requirements.
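As a sketch of what a lower-precision setup can look like, the mapping below declares a byte-typed knn_vector field. The dimension, engine, and field names are illustrative choices, not prescribed values; the same mapping shape also accepts the default float type.

```python
# Illustrative k-NN index sketch using a lower-precision vector type.
# Dimension, engine, and names are assumptions made for the example.

index_body = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 384,
                "data_type": "byte",  # lower-precision storage
                "method": {
                    "name": "hnsw",
                    "engine": "faiss",
                    "space_type": "l2",
                },
            }
        }
    },
}
```

Byte and binary vectors trade some precision for a much smaller memory footprint, which is exactly what lets larger datasets fit into GPU memory during indexing.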
Boosting recall in compressed vector search: This release also tackles the challenge of compressed vector indexes, which are useful for reducing storage costs but can come at the expense of accuracy. OpenSearch 3.2 introduces two techniques—Asymmetric Distance Computation (ADC) and Random Rotation (RR)—that preserve search quality in quantized indexes. With these methods, recall rates improve significantly, making compressed indexes more suitable for production workloads where precision is important.
Neural Search plugin enhancements: The Neural Search plugin has also been expanded with new options for configuring dense and sparse embeddings. Users now have control over parameters such as compression level, text chunking strategy, and pruning methods. Additional settings like batch size and embedding reuse improve efficiency when indexing large volumes of text. These improvements make semantic search more adaptable to specific performance and relevance requirements.
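For context, dense embeddings are typically generated at ingest time through a pipeline. The sketch below shows the general shape of a text_embedding processor; the model ID is a placeholder and the field names are illustrative.

```python
# Sketch of an ingest pipeline that embeds a text field at index time.
# "my-model-id" is a placeholder for a deployed model's ID; field names
# are illustrative.

pipeline_body = {
    "description": "Embed the 'text' field into 'text_embedding'",
    "processors": [
        {
            "text_embedding": {
                "model_id": "my-model-id",  # placeholder
                "field_map": {"text": "text_embedding"},
            }
        }
    ],
}
```

The new 3.2 options, such as chunking strategy and pruning, attach to this kind of configuration, tuning how the text is split and how sparse vectors are trimmed before they are stored.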
Plan-execute-reflect agents (GA): On the AI agent side, plan-execute-reflect agents have reached general availability. These agents can break down complex tasks into smaller steps, execute them, and refine their approach through reflection. The 3.2 release includes enhancements that improve their prompts and give developers more control over message history.
Agentic search and memory (experimental): Two experimental features push the boundaries further. Agentic search allows users to issue queries in plain natural language, which the agent then translates into OpenSearch DSL. Agentic memory, meanwhile, gives agents the ability to persist knowledge across sessions, enabling more personalized and context-aware interactions.

Observability and Analytics

Trace Analytics: OpenSearch has long been used for observability, and version 3.2 strengthens this capability. The Trace Analytics plugin now supports OpenTelemetry (OTel) data formats with enhanced compatibility through Data Prepper 2.11. Users can configure their OTel source with output_format: otel to retain standard OTel fields and metadata, streamlining the process of ingesting and analyzing traces. In addition, users now have more control over the complexity of service maps, as they can configure the maximum number of nodes and edges displayed.
PPL Calcite updates: The Piped Processing Language (PPL) also benefits from a major upgrade to its Apache Calcite-based query engine. New pushdowns and functions, including aggregation functions, filter function pushdown, and sort-merge-join pushdown, significantly improve query execution performance and flexibility. Together these updates boost performance, correctness, and usability for complex queries across OpenSearch data sources.
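As an illustration of the kind of query these pushdowns accelerate, here is a small PPL example; the index and field names are hypothetical.

```python
# A hypothetical PPL query: the where filter and the stats aggregation
# are the kinds of operations the new pushdowns can execute closer to
# the data rather than on the coordinator.

errors_by_host = (
    "source=web-logs "
    "| where status >= 500 "
    "| stats count() as errors by host "
    "| sort - errors "
    "| head 10"
)
```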
Prometheus exporter: Finally, the Prometheus exporter plugin has officially transitioned into the OpenSearch project. Although it is still released separately from the core distribution, it now follows the same release cadence and remains fully compatible with existing Prometheus scraping workflows. This ensures continuity for users while aligning the plugin more closely with the OpenSearch ecosystem.

Getting Started

OpenSearch 3.2 is available on the downloads page. You can explore features hands-on in the OpenSearch Playground or review the release notes for more details.
As always, the OpenSearch community is active on the forum and Slack — a good place to share feedback, ask questions, and connect with other users.

Looking Ahead

OpenSearch 3.2 represents a significant step forward in the platform’s evolution, particularly in AI-driven search capabilities and performance optimization. The general availability of gRPC/Protobuf support and plan-execute-reflect agents, combined with experimental features like agentic search and memory, signals OpenSearch’s commitment to becoming a comprehensive platform for both traditional search and modern AI applications.
With continued focus on scalability, performance, and developer experience, OpenSearch is positioning itself as a versatile solution for organizations looking to harness the power of search, observability, and generative AI in a unified platform. The community-driven development approach ensures that future releases will continue to address real-world needs while pushing the boundaries of what’s possible with open-source search technology.