By Gerardo Delgado
Published November 12, 2025
Large language model (LLM)-based AI assistants are powerful productivity tools, but without the right context and information, they can struggle to provide nuanced, relevant answers. While most LLM-based chat apps allow users to supply a few files for context, they often don’t have access to all the information buried across the slides, notes, PDFs and images on a user’s PC.
Nexa.ai’s Hyperlink is a local AI agent that addresses this challenge. It can quickly index thousands of files, understand the intent of a user’s question and provide contextual, tailored insights.
A new version of the app, available today, includes acceleration for NVIDIA RTX AI PCs, tripling retrieval-augmented generation (RAG) indexing speed. For example, a dense 1GB folder that previously took almost 15 minutes to index can now be ready for search in just four to five minutes. In addition, LLM inference is accelerated by 2x for faster responses to user queries.
Turn Local Data Into Instant Intelligence
Hyperlink uses generative AI to search thousands of files for the right information, understanding the intent and context of a user’s query, rather than merely matching keywords.
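Nexa.ai hasn’t published Hyperlink’s internals, but intent-aware search of this kind is typically built on embeddings: files and queries are mapped into the same vector space, so a question matches documents by meaning rather than by shared keywords. The minimal sketch below illustrates that core idea; the sentence-transformers library, the all-MiniLM-L6-v2 model, the folder path and the sample query are all illustrative stand-ins, not Hyperlink’s actual components.

```python
# Minimal sketch of embedding-based semantic search over local files.
# Illustrative only -- not Hyperlink's implementation. Assumes the
# open-source sentence-transformers library and a small local model.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Indexing step: embed the text of each file once, up front.
files = list(Path("~/Documents").expanduser().rglob("*.txt"))
texts = [f.read_text(errors="ignore") for f in files]
embeddings = model.encode(texts, normalize_embeddings=True)

# Query step: embed the question and rank files by cosine similarity,
# so a query about "budget overruns" can surface a file that never
# uses those exact words.
query = model.encode(
    ["Why did we go over budget last quarter?"],
    normalize_embeddings=True,
)
scores = embeddings @ query.T  # cosine similarity (vectors are normalized)
for i in np.argsort(scores.ravel())[::-1][:3]:
    print(f"{scores[i, 0]:.3f}  {files[i]}")
```

In practice, production tools chunk long documents, store the vectors in a persistent index and hand the top-ranked passages to an LLM as context, which is what makes GPU-accelerated indexing speedups like the ones above matter for large folders.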
To do this, it creates a searchable index of all local files a user indicates — whether…