Enterprise-Ready Generative AI and Large Language Models

Following the hype created by the launch of ChatGPT, many organizations are rightfully wondering how this technology can be used effectively in an enterprise context. Squirro recognizes the power of large language models and provides a solution for harnessing this power within your organization. It does so by overcoming some of the current limitations on enterprise deployment, such as hallucinations, integration into existing systems and workbenches, and security and access-rights concerns.

WHAT SQUIRRO OFFERS

The first LLM-enabled Semantic Enterprise Search and Insights Cloud:

  • Enterprise-ready integration of powerful LLM functionality into your organization's existing systems and workbenches through Squirro’s Insight Engine (Gartner MQ Visionary for Insight Engines 2021 and 2022) and proven Information Retrieval (IR) pipeline
  • Respecting internal requirements regarding data security, privacy, and access control lists (ACLs)
  • Countering LLMs’ inherent hallucinations by referencing the relevant documents and providing access to the source of information for fact-checking

SOLUTION BENEFITS

Thanks to Squirro’s proven Insight Engine and the Information Retrieval (IR) approach, the solution offers the following benefits:

  • Revolutionized enterprise search with intuitive LLM features for question answering
  • Increased efficiency and quality of numerous business processes through augmentation and automation
  • Summarization of actionable real-time insights in the form of user-friendly dashboards

APPLICABLE FOR A BROAD RANGE OF USE CASES

Squirro's IR pipeline has been implemented for many organizations with a broad range of use cases. With the integration of generative AI technology, imagine what it could mean for your business processes:

  • Enterprise Search: more than 264 hours saved per employee per annum
  • Knowledge Management: 38% efficiency increase in a range of business processes
  • Service Management: 30% reduction in the mean time to resolution
  • Risk, Audit & Compliance: 80% reduction of time needed to identify relevant risks
  • Sales Management: 5% Net New Revenue (NNR) increase thanks to significant improvements in customer experience based on new insights and personalization possibilities

WATCH THE SOLUTION IN ACTION

In a recent webinar, Generative AI for Enterprise: How to Tame a Stochastic Parrot, Squirro presented the solution with a live demo.

E-BOOK

Download the Generative AI for Enterprise e-book to gain insight into:

  • The challenges behind the implementation of generative AI in an enterprise context
  • Views of decision-makers across the US, EMEA, and APAC regarding the implementation and use cases of this revolutionary AI technology based on an empirical survey
  • How Squirro resolves the existing limitations of generative AI for internal deployment, and
  • How to kick-start the implementation of Squirro’s generative AI solution for your organization

HOW IT WORKS

After the user submits a question, it is reformulated and fed to Squirro’s IR platform, to which all of the organization's internal, external, and premium data sources are connected. If the user has access rights to the information, the answer is delivered in a dialogue window within the existing workbench of choice. Transparency and explainability are ensured by providing evidence for the answer and a link to the source of information. Follow-up questions reference both the dialogue so far and new evidence from all data sources.
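
To make the flow concrete, here is a minimal, self-contained Python sketch. The Document type, the keyword-overlap retriever, the ACL field, and the injected llm callable are illustrative assumptions rather than Squirro's actual API, and the query-reformulation step is simplified to passing the dialogue history into the prompt.

```python
# Illustrative sketch only -- not Squirro's implementation.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Document:
    url: str
    text: str
    acl_groups: set[str]  # groups entitled to read this document


def retrieve(query: str, docs: list[Document], user_groups: set[str],
             top_k: int = 3) -> list[Document]:
    """Rank documents by naive keyword overlap, keeping only those the
    current user is entitled to see (the ACL check precedes generation)."""
    terms = set(query.lower().split())
    allowed = [d for d in docs if d.acl_groups & user_groups]
    return sorted(allowed,
                  key=lambda d: len(terms & set(d.text.lower().split())),
                  reverse=True)[:top_k]


def answer(question: str, docs: list[Document], user_groups: set[str],
           history: list[str], llm: Callable[[str], str]) -> dict:
    """Retrieve entitled evidence, prompt the model with it plus the dialogue
    so far, and return the answer together with its source links."""
    evidence = retrieve(question, docs, user_groups)
    prompt = ("Answer the question using only the evidence below.\n"
              f"Dialogue so far: {history}\n"
              f"Evidence: {[d.text for d in evidence]}\n"
              f"Question: {question}")
    return {"answer": llm(prompt), "sources": [d.url for d in evidence]}
```

Returning the source URLs alongside the generated answer is what allows the dialogue window to link back to the underlying documents for fact-checking.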

FULLY EMBEDDED IN SQUIRRO’S SEMANTIC INSIGHT CLOUD PLATFORM

Squirro’s advanced IR stack, featuring Composite AI and Insight Engine technologies, includes all the prerequisites for successfully deploying LLMs in an enterprise context.

FAQs

During Squirro's webinar, Generative AI for Enterprise: How to Tame a Stochastic Parrot, Squirro received overwhelmingly positive responses from the audience and some very valid questions. Here you will find an overview of the most frequently asked questions and Squirro’s answers.

You already saw part of it in our vacation policy demo. When you ask the first question, “What is the vacation policy?”, it comes back with a detailed policy response. The follow-up question is then also answered from the policy, taking the previous answers from the chat into account. So we look at the new evidence, but we also look at your current chat content to answer your questions.
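
Continuing the hypothetical answer() sketch from the HOW IT WORKS section, a follow-up turn simply carries the accumulated dialogue into the next call; docs and llm stand for the same placeholder inputs as before, and the questions are only examples.

```python
# Illustrative follow-up turn, reusing the hypothetical answer() sketch above.
history: list[str] = []
user_groups = {"all-staff"}

first = answer("What is the vacation policy?", docs, user_groups, history, llm)
history += ["Q: What is the vacation policy?", f"A: {first['answer']}"]

# Answered from freshly retrieved evidence *and* the chat context so far.
follow_up = answer("Does unused vacation carry over?", docs, user_groups,
                   history, llm)
```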

At Squirro, we have taken the approach of combining LLMs with Squirro’s information retrieval stack, making it possible to answer questions from your internal data. The advantage of this approach is that the answers can always be traced back to the original documents in your data, and it does not require a full re-training of the LLM.

In the demo shown during the webinar, we used the original OpenAI endpoint, which uses the same model that is behind ChatGPT (GPT-3.5 Turbo), not the Azure OpenAI API. But we also allow using models of your choice. So we don’t decide this for you.
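
As a rough illustration of that flexibility, the snippet below (assuming the openai Python SDK, v1+) shows how the same chat call could be pointed either at the public OpenAI endpoint or at a dedicated Azure OpenAI deployment; the endpoint URL, deployment name, API version, and environment variables are placeholders.

```python
# Sketch of a swappable model backend; all identifiers here are placeholders.
import os

from openai import AzureOpenAI, OpenAI


def make_client(backend: str):
    if backend == "openai":
        # Public OpenAI endpoint (the webinar demo used the GPT-3.5 Turbo model).
        return OpenAI(api_key=os.environ["OPENAI_API_KEY"]), "gpt-3.5-turbo"
    # Dedicated deployment on a (private) Azure OpenAI resource.
    return AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint="https://example-resource.openai.azure.com",
        api_version="2024-02-01",
    ), "my-gpt35-deployment"


client, model = make_client("openai")
reply = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "What is the vacation policy?"}],
)
print(reply.choices[0].message.content)
```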

Any data curation surely helps. However, the combination of Squirro’s Information Retrieval stack and LLMs allows for getting acceptable results with relatively “messy” input data.

Yes, we only had a little time to get into Squirro’s Sales Insights solution, which is integrated inside Salesforce. Currently, it is already possible to ingest previous customer communication not only from Salesforce but also from other internal systems (like e-mail or Service Management Systems) and public data sources and provide all of that in a unified chat interface inside the workbench of your choice.

There are several approaches. One variant that works well is a map-reduce approach, where you first try to find answers in each document individually. That way, you shrink the amount of information you must put into the context window. That being said, the currently available context window is about 4,096 tokens, which is already quite large.
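
A bare-bones version of that map-reduce idea might look like the following, with the model injected as a generic llm callable (illustrative only, not Squirro's implementation).

```python
# Map-reduce question answering over many documents, illustrative only.
from typing import Callable


def map_reduce_answer(question: str, documents: list[str],
                      llm: Callable[[str], str]) -> str:
    # Map: query each document individually so no single prompt has to
    # hold the full corpus in the context window.
    partials = [
        llm(f"Using only this document, answer '{question}'. "
            f"Reply 'NO ANSWER' if it is not covered.\n\n{doc}")
        for doc in documents
    ]
    # Reduce: merge the per-document answers into one consolidated reply.
    evidence = "\n".join(p for p in partials if "NO ANSWER" not in p)
    return llm(f"Combine these partial answers into a single answer to "
               f"'{question}':\n\n{evidence}")
```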

Squirro can deal with any type of PDF, including PDFs that require optical character recognition (OCR). Together with partner applications, we can also work with voice-to-text and data-to-text. Excel files are supported as well.
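
As a generic illustration (not Squirro's ingestion code), a text-layer-first extraction with an OCR fallback can be sketched with the open-source pypdf, pdf2image, and pytesseract libraries.

```python
# Generic PDF text extraction with an OCR fallback, illustrative only.
# Requires pypdf, pdf2image (plus Poppler), and pytesseract (plus Tesseract).
import pytesseract
from pdf2image import convert_from_path
from pypdf import PdfReader


def pdf_to_text(path: str) -> str:
    # Try the embedded text layer first.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if text.strip():
        return text
    # Scanned PDF: rasterize the pages and run OCR on each image.
    images = convert_from_path(path)
    return "\n".join(pytesseract.image_to_string(img) for img in images)
```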

For the demo, we use the OpenAI endpoint. For a customer deployment, a dedicated model deployed to a (private) Azure cloud can be used.

The classification happens first in Squirro’s advanced Information Retrieval stack. That is one of the key advantages of the approach.

An on-premises deployment of the Squirro platform is currently possible, but high-quality LLMs are not yet available in the open-source domain and can therefore only be run on the private or public cloud of the relevant vendors. This might change in the future.

Squirro’s Information Retrieval stack already offers multiple query strategies, allowing us to tune the ranking of the retrieved documents based on the use case. So yes, this is already possible.
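
As a generic illustration of what such tuning can look like (field names and weights are assumptions, and this is not Squirro's query DSL), a ranking strategy might boost title matches over body matches and restrict the search to one use-case-relevant source.

```python
# Illustrative Elasticsearch-style query; fields and boosts are assumptions.
ranking_strategy = {
    "query": {
        "bool": {
            "should": [
                # Matches in the title count three times as much as body matches.
                {"match": {"title": {"query": "vacation policy", "boost": 3}}},
                {"match": {"body": {"query": "vacation policy", "boost": 1}}},
            ],
            # Limit the search to a specific, use-case-relevant source.
            "filter": [{"term": {"source": "hr-handbook"}}],
        }
    },
    "sort": [{"_score": {"order": "desc"}}],
}
```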

Yes, it’s technically possible, but it is a matter of the data availability in your internal systems or as premium data sources. Please reach out to Squirro’s experts to discuss the feasibility of your specific use case.

We provide the framework to run the LLMs on your internal data. So we provide entitlement handling, linkage back to the source for explainability and compliance, and other capabilities offered by our proven Information Retrieval stack.

This is achieved through a combination of Squirro’s Composite AI technologies. We use our Information Retrieval system to enrich the data and knowledge graphs to get the right information. Insights are then developed from the information extracted through NLP text classification and enriched further with generative AI.
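
A highly simplified sketch of such a composite pipeline follows, with the classifier, knowledge graph, and generative model all injected as callables; the stage names are illustrative assumptions, not Squirro's internal architecture.

```python
# Composite pipeline sketch: classification -> knowledge graph -> generation.
from typing import Callable


def composite_insight(text: str,
                      classify: Callable[[str], list[str]],
                      kg_lookup: Callable[[str], dict],
                      llm: Callable[[str], str]) -> str:
    labels = classify(text)                                # NLP text classification
    facts = {label: kg_lookup(label) for label in labels}  # knowledge-graph context
    # Generative step: turn the extracted labels and related facts into an insight.
    return llm(f"Summarize the key insight.\nText: {text}\n"
               f"Classes: {labels}\nRelated facts: {facts}")
```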

Entailment chains are used in natural language processing to determine the logical relationships between sentences and phrases. Instead of simply believing the answer that the LLM produces, we add a step that critiques and double-checks the generated response. Basically, it retrieves evidence for the response and indicates how likely it is that the response has been hallucinated. This way, the user always stays in control of the fact-checking process.
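
In simplified form, such a check can be sketched as scoring the generated answer against each piece of retrieved evidence with an entailment (NLI) model, injected here as a generic nli callable; the threshold is an arbitrary placeholder.

```python
# Entailment-style hallucination check, illustrative only.
from typing import Callable


def hallucination_check(answer: str, evidence: list[str],
                        nli: Callable[[str, str], float],
                        threshold: float = 0.5) -> dict:
    # How strongly does the best supporting passage entail the answer?
    support = max((nli(premise, answer) for premise in evidence), default=0.0)
    return {
        "answer": answer,
        "max_support": support,
        # Flagged for the user, who stays in control of the fact-checking.
        "likely_hallucinated": support < threshold,
    }
```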

IMPLEMENT GENERATIVE AI

Are you interested in learning about the process for implementing generative AI for your business? Squirro can help! It starts with a brief call to discuss your needs.
