Generative AI for Enterprise: How to Tame a Stochastic Parrot

During the last couple of years of the 20th century, Google rose to prominence. Compared to the search engines of the day, it seemed clear that "search," as defined before Google, was about to be completely disrupted. What nobody foresaw at the time, however, was what would really happen: the end of the traditional ways of selling advertising.

With Generative AI rapidly popularized by ChatGPT, we may be facing a similar Google moment right now. Beyond the recent hype, there are many ongoing discussions about the impact it will actually have, especially in the enterprise context.

Large language models are sometimes likened to parrots that merely repeat what they have heard. However, unlike a parrot, ChatGPT uses probabilistic methods to generate its responses: instead of simply repeating what it has seen before, it predicts what is most likely to come next based on the input it receives. This is why ChatGPT is referred to as a "stochastic" model – it generates responses based on probability rather than determinism.
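To make the "stochastic" part concrete, here is a minimal, purely illustrative Python sketch of probability-weighted next-token sampling. The toy prompt and the hard-coded probabilities are invented for illustration; a real LLM derives such probabilities from a neural network conditioned on the entire input.

```python
import random

# Toy next-token distribution. In a real LLM these probabilities come from
# a neural network conditioned on the full input; here they are made up
# purely to illustrate probability-based generation.
next_token_probs = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.25, "roof": 0.15},
}

def sample_next_token(prompt: str) -> str:
    """Pick the next token at random, weighted by its probability."""
    dist = next_token_probs[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can yield different continuations on different runs --
# that is what "stochastic" means in practice.
for _ in range(3):
    print(sample_next_token("the cat sat on the"))
```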

At the same time, while ChatGPT is extremely advanced and can generate responses that are very human-like, it is still limited by the data it was trained on. It can produce nonsensical responses, lack context, or disregard existing access rights within a company.

To create true value in the enterprise context, Generative AI capabilities need to be integrated into a semantic enterprise search engine and trained on internal data.
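As a rough sketch of what such an integration could look like, the following Python example pairs a retrieval step over internal documents with a generative model call. The sample documents, the deliberately naive keyword-overlap scoring, and the call_llm stub are assumptions made purely for illustration, not a description of any actual product implementation; a production setup would use real embeddings, a vector index, access-control checks, and a genuine LLM API.

```python
from typing import List

# Minimal sketch of retrieval-augmented generation over internal documents.
INTERNAL_DOCS = [
    "Q3 sales report: revenue grew 12% in the EMEA region.",
    "HR policy: remote work requires manager approval.",
    "Security guideline: customer data must stay in-region.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive term overlap (stand-in for semantic search)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Stub for a generative model; a real system would call an LLM API here."""
    return f"[LLM answer grounded in the provided context]\n{prompt}"

def answer(query: str) -> str:
    # Ground the generative model in retrieved internal content instead of
    # letting it answer from its public training data alone.
    context = "\n".join(retrieve(query, INTERNAL_DOCS))
    prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("How did revenue develop in EMEA?"))
```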

Complete the form on this page to watch the on-demand webinar.

Squirro Speakers

Dorian Selz
Co-Founder and CEO

Saurabh Jain
CTO

Thomas Diggelmann
Machine Learning Engineer

Lauren Hawker Zafer
Head of Training and Education
Moderator

The Webinar Discusses

Bias and Hallucination

Large language models (LLMs) can be biased due to the data they are trained on. How to deal with bias and hallucinations in an enterprise context.

Retraining on Internal Data

Public LLMs are trained on public data. To make them effective in an enterprise context, one needs to retrain them on enterprise data. How?

Integration with Existing Systems

Another challenge that enterprises face when adopting LLMs is integration with existing systems. We will discuss integration approaches.

Data Privacy and Security Concerns

Public LLMs are trained on massive amounts of public data. Enterprise data exposed to such models risks being absorbed into them. Implementing robust security measures is therefore key.

Cost

LLMs can be very expensive, in terms of both hardware and software costs. How can these new models be applied in an economically responsible way?
