Introducing Prompt Compression and Query Optimization, a short course made in collaboration with MongoDB and taught by Richmond Alake, Developer Advocate at MongoDB.
In this course, you’ll learn to integrate traditional database features with vector search capabilities to optimize the performance and cost-efficiency of large-scale Retrieval Augmented Generation (RAG) applications.
You'll learn how to apply these key techniques:
Prefiltering and Postfiltering: Techniques for filtering results based on specific conditions. Prefiltering narrows the candidate set before the vector search runs, using fields declared as filters when the search index is created, while postfiltering is applied to results after the vector search has been performed (both appear in the pipeline sketch after this list).
Projection: Selecting a subset of the fields returned from a query to minimize the size of the output (shown as the projection stage in the sketch below).
Reranking: Reordering search results using additional data fields so that the most relevant results move higher up the list (see the reranking sketch after the pipeline example).
Prompt Compression: Reducing the length of prompts, which can be expensive to process in large-scale applications.
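To make these ideas concrete, here is a minimal sketch of how prefiltering, postfiltering, and projection might fit together in a single MongoDB aggregation pipeline. The connection string, database, collection, index name, field names, and threshold values are illustrative assumptions, not the course's exact code:

```python
from pymongo import MongoClient

# Hypothetical connection details, database, collection, index, and field
# names -- adjust all of them to your own Atlas deployment.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
collection = client["sample_db"]["movies"]


def get_embedding(text: str) -> list[float]:
    """Placeholder: call the same embedding model used to embed the documents."""
    raise NotImplementedError("plug in your embedding model here")


query_embedding = get_embedding("Recommend an uplifting space adventure")

pipeline = [
    {
        # Vector search with a PREFILTER: only documents matching the filter
        # become candidates. The filtered field ('year' here) must be declared
        # with type "filter" in the vector search index definition.
        "$vectorSearch": {
            "index": "vector_index",        # assumed index name
            "path": "plot_embedding",       # assumed embedding field
            "queryVector": query_embedding,
            "numCandidates": 150,
            "limit": 20,
            "filter": {"year": {"$gte": 2015}},
        }
    },
    # Expose the similarity score so later stages can use it.
    {"$addFields": {"score": {"$meta": "vectorSearchScore"}}},
    # POSTFILTER: applied after the vector search has already produced results.
    {"$match": {"score": {"$gte": 0.75}}},
    # PROJECTION: return only the fields the application actually needs.
    {"$project": {"_id": 0, "title": 1, "plot": 1, "year": 1, "score": 1}},
]

results = list(collection.aggregate(pipeline))
```

Prefiltering shrinks the candidate set before the approximate nearest-neighbor search runs, which is generally cheaper than discarding results afterward; the postfilter handles conditions, such as a minimum similarity score, that only exist once the search has scored the documents.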
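Reranking is then typically applied in application code on the retrieved documents. The sketch below blends the vector similarity score with a metadata field; the 'score' and 'rating' field names and the 0.3 weight are illustrative assumptions:

```python
def rerank(docs: list[dict], weight: float = 0.3) -> list[dict]:
    """Reorder retrieved documents by blending vector relevance with metadata.

    'score' is the vector similarity returned by the search stage and
    'rating' is an assumed 0-10 metadata field; both names are illustrative.
    """
    def combined(doc: dict) -> float:
        relevance = doc.get("score", 0.0)
        rating = doc.get("rating", 0.0) / 10.0  # normalize the rating to 0-1
        return (1 - weight) * relevance + weight * rating

    return sorted(docs, key=combined, reverse=True)


# Example: the slightly less similar but much better-rated document moves up.
docs = [
    {"title": "A", "score": 0.82, "rating": 6.5},
    {"title": "B", "score": 0.78, "rating": 9.1},
]
print([d["title"] for d in rerank(docs)])  # ['B', 'A']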
Through hands-on exercises, you'll also learn how to:
Implement vector search for RAG using MongoDB.
Develop a multi-stage MongoDB aggregation pipeline.
Use metadata to refine and limit the search results returned from database operations, enhancing efficiency and relevance.
Streamline the outputs from database operations by incorporating a projection stage into the MongoDB aggregation pipeline, reducing the amount of data returned and optimizing performance, memory usage, and security.
Rerank documents to improve information retrieval relevance and quality, using metadata values to determine each document's position in the reordered results.
Implement prompt compression, and build intuition for when to use it and for the operational advantages it brings to LLM applications (a minimal sketch follows this list).
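One widely used open-source tool for prompt compression is Microsoft's LLMLingua. The sketch below is a minimal illustration assuming the llmlingua package (pip install llmlingua) and its PromptCompressor.compress_prompt interface; the model name, token budget, and example context are assumptions, and parameter names may differ across library versions:

```python
# pip install llmlingua   (assumed dependency)
from llmlingua import PromptCompressor

# LLMLingua-2 style compressor; the model name is an assumption shown for
# illustration -- check the llmlingua documentation for current options.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

# Retrieved context that would normally come out of the RAG pipeline above.
retrieved_chunks = [
    "Document chunk one returned by the vector search ...",
    "Document chunk two returned by the vector search ...",
]

result = compressor.compress_prompt(
    retrieved_chunks,
    instruction="Answer the user's question using only the context provided.",
    question="Recommend an uplifting space adventure.",
    target_token=300,  # rough token budget for the compressed context
)

# The result includes the compressed text plus before/after token counts,
# which makes the savings easy to measure.
compressed_prompt = result["compressed_prompt"]
print(result["origin_tokens"], "->", result["compressed_tokens"], "tokens")
```

Fewer prompt tokens means lower per-call cost and latency, which is the operational advantage prompt compression brings to large-scale LLM applications.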
Start optimizing the efficiency, security, query processing speed, and cost of your RAG applications with prompt compression and query optimization techniques.