
Relative Insight and Large Language Model (LLM) FAQs

We have put together answers to frequently asked questions about LLMs.

Written by Trish Pencarska
Updated this week

LLM stands for Large Language Model. These are advanced natural language processing models that use deep learning techniques to understand and generate human-like text.

These models are trained on massive amounts of text data and can perform a wide range of language-related tasks, such as text generation, translation, summarization, question answering, and more.

Below, you will find more information on the use of AI and LLMs at Relative Insight.

What type of AI model is being used (for example, ChatGPT, Bard, etc.)?

For LLMs, we are currently using a mixture of OpenAI’s GPT-3, GPT-3.5, and GPT-4, and models hosted within AWS Bedrock (Claude).

As for other ML models, we use a variety of open-source and proprietary models, including (but not limited to) transformers, recurrent neural networks, convolutional neural networks, and XGBoost.

We currently use LLMs only for summarizing and communicating data and do not rely on them for the actual analysis of data, as LLM output is neither auditable nor deterministic.
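For illustration only, a summarization request to a hosted LLM typically looks like the sketch below. The model name, prompt, and use of the OpenAI Python client are example assumptions, not a description of our internal implementation.

```python
# Illustrative sketch: asking a hosted LLM to summarize text via an API.
# The model name and prompt are example assumptions, not Relative Insight's
# internal configuration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # example model choice
        messages=[
            {"role": "system", "content": "Summarize the following text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```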

Other ML models are mainly used in our Natural Language Processing (NLP) pipeline, turning unstructured text data into structured data for analysis. Models in other parts of the platform contribute to ease of use for data upload, data cleaning, and more.
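For illustration only, turning unstructured text into structured, analyzable records often looks something like the sketch below, here using the open-source spaCy library as an assumed stand-in rather than our production pipeline.

```python
# Illustrative sketch: converting unstructured text into structured records
# (tokens, lemmas, POS tags, entity labels) with the open-source spaCy library.
# This is an assumed example, not Relative Insight's production pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline

def to_structured(text: str) -> list[dict]:
    doc = nlp(text)
    return [
        {
            "token": token.text,
            "lemma": token.lemma_,
            "pos": token.pos_,          # part-of-speech tag
            "entity": token.ent_type_,  # named-entity label, empty if none
        }
        for token in doc
    ]

records = to_structured("Relative Insight analyzes customer feedback from London.")
```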

All models, whether open source or proprietary, are available for commercial use.

Does the model perform sentiment analysis?

Yes, we have a model that performs sentiment analysis. All other models serve different purposes, such as named entity recognition, semantic parsing, summarization, part-of-speech (POS) tagging, and tokenization.
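As a hedged illustration of what a sentiment model does (not the model used in the platform), the open-source NLTK VADER analyzer scores a piece of text like this:

```python
# Illustrative sketch: scoring sentiment with the open-source NLTK VADER
# analyzer. An assumed example, not the sentiment model used in the platform.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The new dashboard is fantastic and easy to use.")
# scores is a dict of negative, neutral, positive, and compound values,
# e.g. {"neg": 0.0, "neu": 0.5, "pos": 0.5, "compound": 0.7}
```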

Is the model customized based on our requirements or is it standard for all companies?

Currently, all models are general-purpose for all customers.

Is the model interactive?

Not currently. Users cannot ask questions and receive responses based on the model's generative AI capabilities.

Will the AI store our data?

None of the models we use, whether in-house or accessed via API, stores data in any way.

Will our data be used to train the AI for us specifically, or will it be used to benefit all customers?

Currently, no customer data is used to inform our models. We collect our own data for training purposes.

Does the model have the capability to flag inappropriate keywords, Personally Identifiable Information (PII), or IP addresses?

If necessary, PII can be redacted before data is run through our NLP pipeline.
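For illustration only, a simple pre-processing redaction step might look like the sketch below; the regular expressions and placeholder labels are assumptions for the example, not the redaction rules we apply.

```python
# Illustrative sketch: redacting common PII patterns (email addresses and
# IPv4 addresses) before text enters an NLP pipeline. The patterns and
# placeholders are example assumptions, not Relative Insight's redaction rules.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = IPV4.sub("[IP]", text)
    return text

print(redact("Contact jane.doe@example.com from 192.168.0.1"))
# -> "Contact [EMAIL] from [IP]"
```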

Does the platform use LLMs?

We use LLMs in select areas of the platform, currently only for summarizing and communicating data.
