How LLMs Have Changed UI and UX in Software Engineering

June 21, 2024

Introduction

Most of us have used ChatGPT and other LLMs (Large Language Models) for various purposes, but their impact on the tech world is bigger than it may seem. LLMs have changed how engineers move through the Software Development Lifecycle and how they write code. Anyone with an idea now only needs to explain it clearly to an LLM to start bringing it to life. This shift has also transformed the User Experience side of websites across the industry. Let's dive in and see how LLMs have changed User Experience.

Traditional Ways

Before we look at how LLMs have simplified things for developers, let us see how things worked without them. User assistance on websites came mainly through navigation menus and help sections, which were limited but still useful for many sites. Support bots served predefined answers to a fixed set of expected questions. That was the core problem: canned responses were always limited and could never cover every issue a user actually faced.


LLMs for a Better User Experience

Now that we know the problem, the solution is straightforward and very effective. Since the 2017 paper "Attention Is All You Need" introduced the transformer architecture, many websites and applications have used LLMs to improve their User Experience. By fine-tuning an LLM on the documentation of a website, application, or software product, the product becomes easier for users to interact with and understand. Users can ask any kind of question, and because answers are no longer limited to a predefined list, the scope of information retrieval and product understanding increases dramatically.

For example, consider fine-tuning an LLM on insurance policies. Whenever a user receives an insurance policy document, they can upload it and interact with the LLM to understand every clause, benefit, and even drawback of the policy they have opted for. Because the model is fine-tuned on sufficient domain data, it can also suggest the best policy for, say, a vehicle.
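To make the "upload and ask" interaction concrete, here is a minimal, hypothetical sketch of the glue around such a model: naive keyword retrieval picks the policy clauses most relevant to the question, and a prompt is assembled from them. The function names, the sample policy text, and the prompt format are all illustrative assumptions; the actual model call is omitted.

```python
import re

def relevant_clauses(document: str, question: str, top_k: int = 2) -> list[str]:
    """Naive retrieval: rank policy clauses by word overlap with the question."""
    clauses = [c.strip() for c in document.split("\n") if c.strip()]
    q_words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(
        clauses,
        key=lambda c: len(q_words & set(re.findall(r"\w+", c.lower()))),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(document: str, question: str) -> str:
    """Assemble the context-plus-question prompt sent to the fine-tuned model."""
    context = "\n".join(relevant_clauses(document, question))
    return f"Policy context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical policy document a user might upload.
policy = (
    "Clause 1: Third-party liability covers damage to other vehicles.\n"
    "Clause 2: Own-damage cover includes theft and fire.\n"
    "Clause 3: A deductible of 5000 INR applies to every claim."
)
print(build_prompt(policy, "Does the policy cover theft?"))
```

In a production system the keyword overlap would typically be replaced by embedding-based retrieval, but the shape of the interaction is the same.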

The steps are, as always:

  • Data collection: We first gather documents covering the insurance policies available in one country, for example India, so that the LLM knows all the policies available to the user. We also collect multiple individual insurance policy documents.
  • Data preprocessing: Next, we preprocess the documents with standard Natural Language Processing steps (such as cleaning, tokenization, stop-word removal, and lemmatization) to create a knowledge base for the LLM to train on and to support information retrieval every time a user asks a question. This makes it easier to return an appropriate answer for a specific user.
  • Choosing the appropriate model: Since no widely available LLM has been specially trained on insurance, we can choose any LLM based on our hardware limitations. Let's choose GPT-2, an open-source LLM available in the HuggingFace model library.
  • Fine-tuning the model: We fine-tune GPT-2 by first loading it with the HuggingFace library, then applying LoRA (Low-Rank Adaptation), a technique widely used to adapt LLMs to specific tasks or use cases like ours.
  • Evaluation: We then evaluate the model by passing it a PDF and asking it to use the document as context for the questions posed to it.
  • Deployment: Finally, we deploy the model on an insurance-related website and see how much user traffic increases once this feature is added!
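The LoRA step above deserves a closer look. The sketch below is not the HuggingFace peft API (which is what one would use in practice); it is a toy, pure-Python illustration of the core idea. The frozen base weight W stays fixed, while only two small matrices A (r x d) and B (d x r) are trained, and the effective weight becomes W + (alpha/r) * B @ A. All class and function names here are hypothetical.

```python
import random

def matmul(X, Y):
    """Plain-Python matrix multiply over lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def madd(X, Y, scale=1.0):
    """Element-wise X + scale * Y."""
    return [[x + scale * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

class LoRALinear:
    """Toy LoRA layer: frozen W (d x d) plus trainable low-rank update B @ A."""
    def __init__(self, d, r, alpha=16):
        rng = random.Random(0)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = [[rng.gauss(0, 0.02) for _ in range(d)] for _ in range(d)]
        # Trainable low-rank factors; B starts at zero, so the layer
        # initially behaves exactly like the pretrained one.
        self.A = [[rng.gauss(0, 0.02) for _ in range(d)] for _ in range(r)]
        self.B = [[0.0 for _ in range(r)] for _ in range(d)]
        self.scale = alpha / r

    def effective_weight(self):
        return madd(self.W, matmul(self.B, self.A), self.scale)

    def trainable_params(self):
        d, r = len(self.W), len(self.A)
        return 2 * d * r  # A and B only; W is frozen

layer = LoRALinear(d=768, r=8)  # 768 matches GPT-2's hidden size
full = 768 * 768
print(f"full fine-tune params per layer: {full}, LoRA params: {layer.trainable_params()}")
```

The point of the exercise: with rank r = 8 the trainable parameter count per layer drops from 589,824 to 12,288, which is why LoRA makes fine-tuning feasible under the hardware limitations mentioned above.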


Challenges

The main challenges in training an LLM for a specific use case and deploying it on a website for better User Experience are hardware limitations, deployment costs, insufficient training data, degraded performance after fine-tuning, and more.

But none of this stops LLMs from delivering a better User Experience than the old traditional ways. LLMs have increased web traffic across websites because they make information so much easier to understand.

Future of User Experience and LLMs

Perhaps in the future, LLMs will deliver the best possible User Experience by generating a website on the fly when the user logs in, or even by creating new features tailored to each user. The potential of LLMs to simplify things for users and developers is enormous. We just have to wait and see.

Conclusion

There's no end to the power of LLMs, as their current applications show. Every business sector uses LLMs wherever it can to grow its customer base by improving User Experience. While many challenges remain, the potential of LLMs in the User Experience domain keeps growing. As we move through the AI age, the relationship between LLMs and software engineering continues to open new possibilities for user-centric innovation.


Intrigued by the possibilities of AI? Let’s chat! We’d love to answer your questions and show you how AI can transform your industry. Contact Us