Generative Pre-trained Transformer (GPT) models, such as GPT-J and GPT-3, have taken the NLP community by storm. These powerful language models excel at a variety of NLP tasks, including question answering, entity extraction, categorization, and summarization, without any supervised training. They require few to no examples to understand a given task and can rival or outperform state-of-the-art models trained in a supervised fashion.
GPT-J is a 6-billion-parameter transformer-based language model released in June 2021 by EleutherAI, a collective of AI researchers. Since forming in July 2020, the group's goal has been to open-source a family of models designed to replicate those developed by OpenAI. Their current focus is replicating the 175-billion-parameter language model GPT-3. But don't let the difference in parameter count fool you: GPT-J outperforms GPT-3 on code-generation tasks and, through fine-tuning, can outperform GPT-3 on a number of common natural language processing (NLP) tasks. This article outlines an array of use cases where GPT-J can be applied and excel. For information on how to fine-tune GPT-J for any of the use cases below, check out our fine-tuning tutorial.
The most natural use case for GPT-J is generating code. GPT-J was trained on a dataset called the Pile, an 825GB collection of 22 smaller datasets spanning academic sources (e.g., arXiv, PubMed), communities (StackExchange, Wikipedia), code repositories (GitHub), and more. The inclusion of GitHub data has led to GPT-J outperforming GPT-3 on a variety of code-generation tasks. While "vanilla" GPT-J is proficient at this task, it becomes even more capable when fine-tuned on a given programming language.
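In practice, getting useful code completions comes down to framing the prompt as the start of the code you want and cutting the completion off at a natural boundary, since GPT-style models keep generating past the function you asked for. A minimal sketch of both steps (the helper names are our own, not part of any library):

```python
def build_code_prompt(signature: str, docstring: str) -> str:
    """Frame a Python function signature and docstring as a completion
    prompt, so the model's continuation is the function body."""
    return f'{signature}\n    """{docstring}"""\n'


def truncate_completion(completion: str) -> str:
    """Keep only the function body: stop at the first non-indented line,
    where the model has moved on past the function we asked for."""
    kept = []
    for line in completion.splitlines():
        if line and not line.startswith((" ", "\t")):
            break
        kept.append(line)
    return "\n".join(kept)
```

The prompt string would then be sent to GPT-J (hosted or local) for completion, and the raw output passed through `truncate_completion` before use.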
An increasingly common NLP use case is building a chatbot: software that simulates human-like conversation with users via text message or chat. Because their main commercial purpose is answering users' questions, chatbots are widely used in customer support scenarios. However, chatbots can also be used to imitate specific people, like Kanye West.
Regardless of your reason for using a chatbot, we recommend fine-tuning GPT-J on transcripts of the specific task. For instance, say you want a custom chatbot to assist with customer support requests. A simple way to curate a fine-tuning dataset is to record transcripts of typical customer support exchanges between your team and customers. On the order of one hundred examples is typically enough for GPT-J to become proficient at your company's specific customer support tasks.
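A common convention for fine-tuning datasets is one JSON record per line with prompt/completion fields; the exact schema depends on the fine-tuning stack you use, so treat this as an illustrative sketch rather than a fixed format:

```python
import json


def transcripts_to_jsonl(transcripts, path):
    """Write (customer_message, agent_reply) pairs as prompt/completion
    JSONL records. The "Customer:"/"Agent:" framing is one illustrative
    convention; check your fine-tuning provider's docs for the exact schema."""
    with open(path, "w", encoding="utf-8") as f:
        for customer, agent in transcripts:
            record = {
                "prompt": f"Customer: {customer}\nAgent:",
                "completion": f" {agent}",
            }
            f.write(json.dumps(record) + "\n")
```

At inference time, the chatbot prompt is framed the same way the training records were, so the model continues in the agent's voice.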
Story writing is the craft of producing fiction in clear, easily understandable prose with a natural narrative flow.
Story writing with GPT-J becomes interesting when you fine-tune on a particular author's style or book series. Imagine a Stephen King writing bot, or a bot that could help generate books 6 and 7 of Game of Thrones because, let's be honest, George R.R. Martin is dragging his feet at this point.
Here’s an example of the beginning of a fictitious piece written by GPT-J-6B:
The main purpose of entity extraction is to pull information from a given text to understand its subject, theme, or other details like names, places, etc. Some interesting use cases for entity extraction include:
Financial market analysis: Extract key figures from financial news articles or documents to use as signals for trading algorithms or market intelligence
Email inbox optimization: Notify users of flight times, meeting locations, and credit card charges without having to open emails
Content recommendation: Extract information from articles and media to recommend content based on entity similarity and user preferences
GPT-J shines new light on entity extraction, providing a model that is adaptive to both general text and specialized documents through few-shot learning.
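Few-shot extraction works by showing the model a handful of solved examples and leaving the last one open for it to complete. A sketch of how such a prompt might be assembled (the Text/Entities format is illustrative, not a fixed API):

```python
def extraction_prompt(examples, query):
    """Build a few-shot entity-extraction prompt. `examples` is a list of
    (text, entities) pairs, where entities maps an entity type to its value.
    The final Text/Entities pair is left open for the model to fill in."""
    parts = []
    for text, entities in examples:
        pairs = "; ".join(f"{k}: {v}" for k, v in entities.items())
        parts.append(f"Text: {text}\nEntities: {pairs}")
    parts.append(f"Text: {query}\nEntities:")
    return "\n\n".join(parts)
```

Because the label vocabulary lives entirely in the prompt, the same helper adapts to financial figures, flight details, or article entities just by swapping the examples.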
Summarization is the process of condensing a given text for quicker consumption without losing its original meaning. GPT-J is quite proficient at summarization out of the box. What follows is an example of taking a snippet of the Wikipedia article on Earth and tasking GPT-J with providing a short summary.
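A common zero-shot trick is to append a "tl;dr:" cue to the passage; GPT-style models tend to continue it with a compressed restatement. A minimal sketch, with a helper to peel the summary back out of the raw generation (both function names are our own):

```python
def summarization_prompt(passage: str) -> str:
    """Append a 'tl;dr:' cue; GPT-style models tend to continue it
    with a short summary of the passage."""
    return passage.strip() + "\n\ntl;dr:"


def extract_summary(generated: str, prompt: str) -> str:
    """Strip the echoed prompt (if present) and keep only the first
    paragraph of the continuation as the summary."""
    continuation = generated[len(prompt):] if generated.startswith(prompt) else generated
    return continuation.strip().split("\n\n")[0]
```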
Not to be confused with summarization, paraphrasing is the process of rewriting a passage without changing the meaning of the original text. Where summarization condenses information, paraphrasing rewords it. While GPT-J is capable of summarization out of the box, paraphrasing with GPT-J is best achieved through fine-tuning. Here is an example of paraphrasing a shorter snippet from the same Earth Wikipedia article used in the summarization example above, after training on hand-written paraphrasing examples.
A widely used commercial use case for GPT-J and other transformer-based language models is copywriting for websites, ads, and general marketing. Copywriting is a crucial marketing process to increase website, ad, and other conversion rates. Through fine-tuning GPT-J on a given company’s voice or previously successful ad campaigns, GPT-J can automatically provide effective copy at a fraction of the cost of hiring a copywriter.
Text classification is the process of categorizing text into organized groups. Unstructured text is everywhere—no, rather: unstructured text is everywhere, in emails, text conversations, websites, and social media, and the first step in extracting value from this data is to categorize it into organized groups. This is another use case where fine-tuning GPT-J leads to the best performance. By providing one hundred or more examples of your classification task, GPT-J can perform as well as or better than the largest available language models, like OpenAI's GPT-3 Davinci.
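Before (or instead of) fine-tuning, classification can be framed as a few-shot prompt over a closed label set, with a small parser to snap the model's free-text output back onto that set. A sketch under those assumptions (the Message/Label framing is illustrative):

```python
def classification_prompt(labeled, labels, query):
    """Build a few-shot classification prompt. `labeled` is a list of
    (text, label) pairs; the final Label is left open for the model."""
    header = f"Classify each message as one of: {', '.join(labels)}.\n\n"
    body = "\n".join(f"Message: {t}\nLabel: {l}" for t, l in labeled)
    return header + body + f"\n\nMessage: {query}\nLabel:"


def parse_label(completion, labels):
    """Map the raw completion back onto the closed label set; return None
    when the model drifts off-list so callers can fall back or retry."""
    first = completion.strip().split("\n")[0].strip().rstrip(".")
    return first if first in labels else None
```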
Sentiment analysis is the act of identifying and extracting opinions from a given text, such as blogs, reviews, social media, forums, news, etc. Perhaps you'd like to automatically analyze thousands of reviews to learn whether customers are happy with your pricing plans, or gauge brand sentiment on social media in real time so you can detect disgruntled customers and respond immediately. The applications of sentiment analysis are endless and applicable to any type of business.
Given the infancy of large transformer-based language models, further experimentation will inevitably surface more use cases where these models prove effective. As you may have noticed, a number of the use cases above rely on fine-tuning GPT-J. At Forefront, we believe the discovery of more use cases will come not only from increased usage of these models, but also from a simple fine-tuning experience that allows for easy experimentation and quick feedback loops. For a walkthrough of fine-tuning GPT-J on Forefront, check out our recent tutorial.
Start fine-tuning and deploying language models or explore Forefront Solutions.