AI-based Natural Language Processing (NLP) systems have the unique ability to extract structured pieces of information from unstructured raw text. This process of extracting and classifying bits of unstructured text into structured “entities” is called Named Entity Recognition (NER).
There are multiple types of entities that people extract from text. One of these is geopolitical entities: references to countries, cities, states, and other groups of people under a common geopolitical umbrella. For example, the sentence “American diplomat Matt arrived in Asia” contains the geopolitical entity “American” as well as the person entity “Matt”.
There are many reasons why people would need to automatically extract geopolitical entities from free text. Anyone following fast-paced news cycles can use entity extraction to quickly categorize incoming news. Surveyors can use entity extraction to bucket survey responses by the respondent’s affiliation. Geopolitical entity extraction technology can benefit a lot of people but can be difficult to set up and use.
To make entity extraction as simple as an API call, we created Forefront Extract, an easy-to-use entity extraction API using state-of-the-art AI. Using the API to extract geopolitical entities from free text is simple and requires only two inputs:
Let’s see the Extract API in action for the examples we outlined.
The constant firehose of international news can be overwhelming, but with Forefront Extract, we can extract the relevant geopolitical entities from news articles to sort which news is the most urgent. Let’s use Forefront Extract to extract entities from some headlines:
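The original post shows the extracted entities inline; as a rough sketch, a batch call might look like the following Python snippet. The endpoint URL, header, and field names here are illustrative assumptions, not the documented interface:

```python
import requests

# Hypothetical endpoint and payload shape for illustration only;
# consult the Extract API docs for the real URL and parameters.
API_URL = "https://api.forefront.ai/extract"  # placeholder URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

headlines = [
    "Diplomats from Japan and South Korea resume trade talks",
    "EU leaders meet in Brussels to discuss energy policy",
]

for headline in headlines:
    payload = {"text": headline, "entity_type": "geopolitics"}  # assumed parameters
    entities = requests.post(API_URL, json=payload, headers=HEADERS).json()
    print(headline, "->", entities)
```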
We can see that the Extract API is able to extract these entities from the news headlines with no problem.
Anyone who has ever designed a survey knows that allowing a respondent to answer questions with free text lets them freely explain their answers rather than be bound by pre-selected drop-downs. For example, if we wanted to ask about a person’s geopolitical affiliation, people can be as specific or vague as they wish:
We can see that with Forefront Extract, a survey bot can ask an open-ended question and get nuanced answers, like which state people are from, rather than just “from the US”.
It’s clear from our examples that Forefront Extract can intelligently extract geopolitical entities and other entities from text. The possibilities unlocked by Forefront Extract are endless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
One of the main selling points of Natural Language Processing (NLP) systems is the ability to automatically extract pieces of information from raw unstructured text. There is an entire subfield of NLP, called Named Entity Recognition (NER), dedicated to the process of extracting and classifying bits of unstructured text into structured “entities”.
There are all kinds of entities that people want to extract from free text. One entity that is present in almost all pieces of text is people’s names. For example, the sentence “Matt works at Apple as an ML engineer.” contains the person entity “Matt” as well as the organization entity “Apple”.
There are plenty of reasons why people would need to quickly extract names from free text. Chatbot designers can use name entity extraction to gather relevant user information and resolve queries faster. Home automation systems use name entity extraction to automate home functions like calling or messaging someone. Name extraction technology benefits all kinds of organizations and people, but up until now any kind of entity extraction system has been difficult to use.
To make name extraction as simple as an API call, we created Forefront Extract, an easy-to-use entity extraction API using state-of-the-art AI. Using the API to extract names from text is simple and requires only two inputs:
Let’s see the Extract API in action for the examples we outlined.
Forefront Extract can parse names from a conversation to personalize a chatbot while gathering relevant information about the user for future use.
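To make this concrete, here is a minimal sketch of a chatbot turn built on such a call, assuming a hypothetical endpoint and response shape (the real interface may differ; see the docs):

```python
import requests

API_URL = "https://api.forefront.ai/extract"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

session = {}  # per-conversation state kept by the chatbot

message = "Hi, my name is Dana and my order never arrived."
result = requests.post(
    API_URL,
    json={"text": message, "entity_type": "people"},  # assumed parameters
    headers=HEADERS,
).json()

# Assumed response shape: {"entities": [{"text": "...", "type": "..."}]}
names = [e["text"] for e in result.get("entities", []) if e.get("type") == "people"]
if names:
    session["customer_name"] = names[0]  # available to this and future agents
print(session)
```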
We can see that with Forefront Extract, a chatbot can extract and store a user’s name and use it in context while also giving that information to any future agent to help personalize the conversation.
A home automation system (HAS) would not be complete without the ability to talk to a friend hands-free. For example, if you wanted your HAS to call a friend and ask if they want to go shopping, you might say something like: “Call Becky and ask her if she wants to go to Sam's Club”. Let’s see how Forefront Extract would handle this:
We can see that the Extract API knows to call Becky and doesn’t mistake the “Sam” in “Sam’s Club” for a person’s name.
It’s clear from our examples that Forefront Extract can intelligently extract names and other entities from text. The possibilities unlocked by Forefront Extract are endless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
A useful function of Natural Language Processing (NLP) systems is extracting structured pieces of labeled information from free text. The process of extracting and labeling structured “entities” from raw text is called Named Entity Recognition (NER).
There are many entities that people would want to extract from text. One such entity is “events” - references to gatherings of people. For example, the sentence “Elizabeth is going to Art Basel” contains the event entity “Art Basel” as well as the person entity “Elizabeth”.
There are plenty of reasons why people need to automatically extract events from free text. Marketing organizations can use event entity extraction to categorize tweets about events they run to track engagement. Home automation systems can use event extraction to automate event information lookups. Event extraction technology benefits organizations and people but setting up a system like this from scratch can be difficult and cumbersome.
To make event extraction as simple as an API call, we created Forefront Extract, an easy-to-use entity extraction API. Using the API to extract events from text is simple and requires only two inputs:
Let’s see the Extract API in action for the examples we outlined.
Running a massive event is hard enough. Tracking engagement is another beast entirely. Marketing organizations may want to monitor tweets about events they run to see how often people are talking about their event. Let’s watch Forefront Extract pull out the referenced event from some recent tweets about VeeCon, a large web3/culture conference.
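In code, monitoring tweets for event mentions could look something like this sketch (the endpoint and payload shape are assumptions for illustration):

```python
import requests

API_URL = "https://api.forefront.ai/extract"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

tweets = [
    "Had an amazing weekend at #VeeCon, see you next year!",
    "VeeCon completely changed how I think about web3 and culture.",
]

for tweet in tweets:
    result = requests.post(
        API_URL,
        json={"text": tweet, "entity_type": "events"},  # assumed parameters
        headers=HEADERS,
    ).json()
    print(tweet, "->", result)
```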
We can see that with Forefront Extract, the event “VeeCon” can be automatically extracted from both tweets even though it is used as a hashtag in the top tweet.
Home automation systems (HAS) can help you find information about local events, but only if they can recognize events in your commands. For example, if you wanted to locate a certain event nearby, you might say something like: “Where is the Bay Area Book Festival?”. Let’s see how Forefront Extract would handle this:
We can see that the Extract API knows that “Bay Area Book Festival” is an event and can now better look up results.
It’s clear from our examples that Forefront Extract can intelligently extract events and other entities from text. The possibilities unlocked by Forefront Extract are endless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
A core function of Natural Language Processing (NLP) systems is the ability to pull out structured pieces of information from free text. This process of extracting and labeling structured “entities” from raw text is called Named Entity Recognition (NER).
There are all kinds of entities that people want to extract from text. A popular entity is “dates” - references to points in time. For example, the sentence “Elizabeth is going to Boston on Thursday” contains the date entity “Thursday” as well as the person entity “Elizabeth”.
There are many reasons why people would need to automatically extract dates from free text. Legal systems can use date entity extraction to categorize legal documents by referenced dates. Home automation systems use date extraction to automate home functions like setting reminders. Date extraction technology can benefit all kinds of organizations and people but up until now any kind of entity extraction system has been difficult to set up and use.
To make date extraction as simple as an API call, we created Forefront Extract, an easy-to-use entity extraction API. Using the API to extract dates from text is simple and requires only two inputs:
Let’s see the Extract API in action for the examples we outlined.
Legal documents can be long-winded and frankly tiring to read manually. Entity extraction systems can parse documents for referenced dates to help paralegals and lawyers sort through and categorize documents. Let’s watch Forefront Extract pull out dates from a recent patent.
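A sketch of that workflow in Python, assuming a hypothetical endpoint and response shape:

```python
import requests

API_URL = "https://api.forefront.ai/extract"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Load the full patent text from a local file.
with open("patent.txt", encoding="utf-8") as f:
    document = f.read()

result = requests.post(
    API_URL,
    json={"text": document, "entity_type": "dates"},  # assumed parameters
    headers=HEADERS,
).json()

# Collect every referenced date for downstream sorting and categorization.
dates = [e["text"] for e in result.get("entities", [])]
print(dates)
```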
We can see that with Forefront Extract, dates can be automatically extracted from long patent documents without having to manually sift through them.
Home automation systems (HAS) can help you set reminders hands-free by extracting dates from your commands. For example, if you wanted your HAS to set a reminder to go shopping, you might say something like: “Remind me to go shopping tomorrow”. Let’s see how Forefront Extract would handle this:
We can see that the Extract API knows that “tomorrow” is a date even though it doesn’t explicitly name a specific date on a calendar.
It’s clear from our examples that Forefront Extract can intelligently extract dates and other entities from text. The possibilities unlocked by Forefront Extract are endless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
Pulling out structured bits of information from unstructured text is a huge selling point of Natural Language Processing (NLP) systems. It’s such a big selling point that there’s a name for the process of extracting and classifying bits of unstructured text into structured “entities” to better work with free text: Named Entity Recognition (NER).
There are all kinds of entities that people want to extract from free text. One of the most common entities is organizations. Organizations are references to organized groups of people. For example, the sentence “Matt works at Apple as an ML engineer.” contains the organization entity “Apple” as well as the person entity “Matt”.
There are plenty of reasons why people would need to quickly extract organizations from free text. Historians and other researchers can extract organizations from documents to expedite research on their particular topic. Any person or company can extract organizations from incoming news articles to quickly bucket and sort through the firehose of news. It’s clear that many people can benefit from organization extraction technology, but up until now entity extraction systems have been difficult to set up and use.
To make organization extraction as simple as an API call, we created Forefront Extract, an easy-to-use named entity extraction API. Using the API to extract organizations from text is simple and requires only two inputs:
Let’s see the Extract API in action for the examples we outlined earlier.
Forefront Extract can help historians by automatically extracting organizations from documents so they can focus on the most relevant pieces of text. Let’s watch Forefront Extract recognize which organizations are mentioned in the Chinon Parchment, a document from 1308 about the Knights Templar.
With Forefront Extract, researchers can quickly categorize documents both historical and otherwise based on the organizations they mention.
The constant influx of headline-worthy news can be overwhelming for many, but with Forefront Extract, we can extract the relevant organizations from news headlines and articles to sort which news is relevant. For example, if we want to filter for news about the Target corporation, we can use Forefront Extract to extract organizations from two headlines: one that is about Target and one that could have been confusing had we just used a keyword search.
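Here’s a sketch of that comparison in Python. The headlines below are stand-ins for the ones shown in the original post, and the endpoint and field names are assumptions:

```python
import requests

API_URL = "https://api.forefront.ai/extract"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

headlines = [
    "Target reports record quarterly earnings",           # about the retailer
    "F.B.I. Names New Target in Ongoing Investigation",   # "Target" is not an organization here
]

for headline in headlines:
    result = requests.post(
        API_URL,
        json={"text": headline, "entity_type": "organizations"},  # assumed parameters
        headers=HEADERS,
    ).json()
    orgs = [e["text"] for e in result.get("entities", [])]
    print(headline, "->", orgs)
```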
We can see that the Extract API recognizes that the first headline is about the organization “Target” whereas the second headline merely uses the capitalized word in context with the F.B.I.
It’s clear from our examples that Forefront Extract can intelligently extract organizations and other entities from all kinds of text. The possibilities unlocked by Forefront Extract are virtually limitless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
The ability to pull out structured bits of information from unstructured free text has always been a selling point of Natural Language Processing and AI systems. Specifically, Named Entity Recognition (NER) describes the process of extracting and classifying bits of unstructured text into structured “entities” to better work with free text.
There are many types of entities that people want to extract from free text. One of the most common entities is locations. For example, the sentence “Clark works out of San Francisco.” contains the location entity “San Francisco” as well as the person entity “Clark”.
There are countless reasons why people would need to quickly and seamlessly extract locations from free text. Customer support organizations can route customer queries to the proper support center if they know exactly where the customer is chatting in from. Disaster tracking systems can use location extraction to quickly assign locations to disaster reports to notify the right people at the right time. Many people can benefit from location extraction technology but up until now it has been difficult to set up and use.
To make location extraction as simple as an API call, we created Forefront Extract, an easy-to-use entity extraction API. Using the API to extract locations from text is easy and only requires two inputs:
Let’s see the Extract API in action for the examples we’ve outlined above.
Forefront Extract can level up any customer support organization by automatically extracting locations from incoming customer queries so that a routing system can use them to assign the chat to the proper agent.
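A routing system built on top of the API might look roughly like this sketch (the endpoint, field names, and routing table are illustrative assumptions):

```python
import requests

API_URL = "https://api.forefront.ai/extract"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Example routing table mapping extracted locations to support centers.
SUPPORT_CENTERS = {"San Francisco": "us-west", "New York": "us-east"}

query = "Hi, I'm writing from San Francisco and my router keeps disconnecting."
result = requests.post(
    API_URL,
    json={"text": query, "entity_type": "locations"},  # assumed parameters
    headers=HEADERS,
).json()

locations = [e["text"] for e in result.get("entities", [])]
center = SUPPORT_CENTERS.get(locations[0], "default") if locations else "default"
print("Routing chat to:", center)
```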
With Forefront Extract, an AI based routing system can use automatically extracted customer locations to send people to the proper support center to get the best help possible.
It can be crucial and even life-saving to monitor and track earthquake reports in real time to be able to notify people in the area as quickly as possible. Let’s use Forefront Extract to extract locations from tweets about earthquake reports.
We can see that the Extract API is intelligent enough to recognize the locations in both tweets, and a system based on Forefront Extract could send out alerts to people in those areas when earthquake reports like these come up.
It’s clear from our examples that Forefront Extract can intelligently extract locations and other entities from all kinds of text. The possibilities with what we can do with Forefront Extract are virtually limitless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
Natural Language Processing (NLP) is a subfield of AI that aims to help machines make sense of how humans write and speak. This is made particularly difficult by the fact that language is by nature messy and inconsistent. In an effort to standardize text, a subfield of NLP called Named Entity Recognition (NER) sprang up to extract and classify elements of unstructured text into a structured dataset, making free text easier to categorize and handle.
For example, the sentence “Jeremy is going to New York City on Thursday to attend Stern's Gala on American Business.” contains at least six different entities within it. We have the subject’s name (Jeremy), a date (Thursday), and a geopolitical entity (American) to name a few. Entity extraction, the act of identifying and classifying these elements of text, has long been a vital part of NLP.
Customer support organizations use entity extraction to route customer queries to the proper support center. Financial firms extract entities from news articles and financial documents to quickly sort incoming news and information. The applications of entity extraction are virtually limitless but the technology has up until now been for the very technical minded to set up and use.
That’s why we created Forefront Extract, an easy to use entity extraction API. Extracting high-quality entities from text of any length is now only an API call away. Using the API is easy and only requires two inputs (a sketch of a sample call follows the entity type list below):
"locations" - references to geographical locations like New York, or Istanbul
"people" - references to people generally by their name
"events" - references to a planned occasion like a meeting or a Gala
"organizations" - references to groups of people like Apple or a local food bank
"dates" - references to times like Thursday or simply “tomorrow”
"geopolitics" - references to large political entities like Americans or a government
Let’s see the Extract API in action.
Forefront Extract can take a free text query from a customer and provide the necessary tags that a routing system can use to assign the proper agent to the customer.
We can see that with Forefront Extract, a routing system can automatically recognize a customer’s location and send them to the proper support center with structured information, so the receiving agent will know the customer’s name and where they live.
The constant influx of financial news can be overwhelming but with Forefront Extract, we can extract the relevant entities from news articles to sort which news is the most urgent. Let’s use Forefront Extract to extract organizations from one news headline that is about Target and one that could have been confusing with just a keyword search.
We can see that the Extract API is smart enough to recognize that the first headline is about the organization “Target” whereas the second headline merely uses the capitalized word in context with the F.B.I.
It’s clear from our examples that Forefront Extract can intelligently extract entities from all kinds of text from multiple domains. The possibilities with what we can do with Forefront Extract are boundless. Documentation for the Extract API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
If you want to get anything done, chances are you have to communicate with someone else. Transcripts are artifacts of our conversations with others and the ability to distill transcripts into short form summaries can unlock all kinds of previously unattainable analyses.
Customer support organizations can use summaries of conversations their customers have with their agents or chatbots to track evolving customer queries. They can even cluster these summaries to create visual representations of their customers’ needs. People can recap interviews with their favorite celebrities and role models to stay up to date with their lives. Economists can summarize interviews with key figures to keep up with rapidly shifting policy and news.
Modern large language models excel at creating high-fidelity abstractive summaries that can freely paraphrase source transcripts into easy-to-read text snippets with varying degrees of compression. These models are powerful but can be hard to set up and get started with. Forefront Summarize is an easy-to-use abstractive text summarization API that can generate high-quality dialogue summaries of any length. Using the API to summarize dialogue is easy and only requires two inputs:
Let’s see the Forefront Summarize API in action.
Interviews are often people’s favorite portions of TV shows and podcasts, but with so many out there it can be daunting to try to keep up. Forefront Summarize can reduce entire interviews down to the most important points to give us what we need. Let’s use Forefront Summarize to summarize a 60-minute interview with Trevor Noah.
With a compression level of 5, we see over 80% of the character count removed while the key facts of the interview are retained. Imagine reading the insights from 5 interviews in the same time it would take to read the entire transcript of 1 interview.
Forefront can tackle dialogue summaries from nearly any field. Let’s use Forefront Summarize to sum up a recent interview with current Fed Chairman Jerome Powell.
The interview saw a reduction in character count by nearly 90% and still conveys the key economic points that Chairman Powell is trying to get across.
Forefront Summarize can transform a multi-turn dialogue between a customer and a customer support agent into a brief or longer-form summary by setting the desired compression level.
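A sketch of such a call at two different compression levels, assuming a hypothetical endpoint and parameter names:

```python
import requests

API_URL = "https://api.forefront.ai/summarize"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

with open("support_chat.txt", encoding="utf-8") as f:
    dialogue = f.read()

for level in (1, 5):  # 1 = longest summary, 5 = most compressed (assumed scale)
    result = requests.post(
        API_URL,
        json={"text": dialogue, "compression": level},  # assumed parameters
        headers=HEADERS,
    ).json()
    print(f"--- compression {level} ---")
    print(result.get("summary"))
```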
We can see that the lower compression level of 1 gives us more information about the customer’s sentiment, whereas the highest compression level of 5 just gets straight to the point about what happened.
It’s clear from our examples that Forefront Summarize can summarize all kinds of dialogue from different domains. The possibilities with what we can do with Forefront Summarize are virtually limitless. Documentation for the Summarize API can be found here.
Ready to sign up and get started? Sign up for Forefront today!
Summarizing long pieces of text and dialogue is an exciting field of AI with applications in nearly every field of research and business. Customer support organizations use summaries of conversations with customers to track evolving customer queries. Financial firms summarize news articles and financial documents to keep up with rapidly shifting markets. Medical institutions rely on summaries for medical notes and research articles to efficiently handle high volumes of patients while keeping up to date with the most recent clinical discoveries.
There are, in general, two kinds of AI-based text summarization: extractive summarization, where the summaries must be quoted exactly from the source material, and abstractive summarization, where the AI is free to paraphrase information to be more concise. The latest large language models excel at creating high-fidelity abstractive summaries but can be hard to set up and use.
That’s where Forefront Summarize, an easy-to-use abstractive text summarization API, comes in! Generating high-quality text or dialogue summaries of any length is now only an API call away. Using the API is easy and only requires two inputs:
Let’s see the API in action for the examples above.
Forefront Summarize can turn a multi-turn dialogue into a brief or longer-form summary by setting the desired compression level.
We can see that the lower compression level of 1 gives us more information about the customer’s sentiment, whereas the highest compression level of 5 just gets straight to the point about what happened.
The constant firehose of news can be overwhelming but with Forefront Summarize, we can reduce entire news articles down to the most important points to give us the news we crave on the go. Let’s use Forefront Summarize to summarize a New York Times Article on gig-work.
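As a rough sketch, the call and the character-count comparison could look like this (the endpoint and field names are assumptions):

```python
import requests

API_URL = "https://api.forefront.ai/summarize"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

with open("nyt_gig_work.txt", encoding="utf-8") as f:
    article = f.read()

result = requests.post(
    API_URL,
    json={"text": article, "compression": 5},  # assumed parameters
    headers=HEADERS,
).json()

summary = result.get("summary", "")
reduction = 1 - len(summary) / len(article)
print(summary)
print(f"Character count reduced by {reduction:.0%}")
```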
With a compression level of 5, we see over 90% of the character count removed while the key facts of the article are retained. Now you can get the facts from 10 articles in the same time it would take to read 1!
Forefront can tackle summaries in nearly any domain. Let’s use Forefront to summarize a medical research article on pre-surgery beverage consumption.
It’s clear from our examples that Forefront Summarize can summarize all kinds of text and dialogue from many domains. The possibilities with what we can do with Forefront Summarize are virtually limitless. Documentation for the Summarize API can be found here.
Ready to get started? Sign up for Forefront!
Less than two weeks ago, EleutherAI announced their latest open source language model, GPT-NeoX-20B. Today, we’re excited to announce that Forefront is the first platform where you can fine-tune GPT-NeoX, enabling our customers to train the largest open source language model on any natural language processing or understanding task. Start fine-tuning GPT-NeoX for free
The same fine-tuning experience our customers have come to know with GPT-J will be offered for GPT-NeoX including free fine-tuning, JSON Lines and text file support, test prompts, Weights & Biases integration, and control over hyperparameters like epochs and checkpoints. We look forward to seeing all the ways our customers will fine-tune GPT-NeoX models to solve complex NLP problems at scale. Let’s take a closer look at fine-tuning.
What is fine-tuning?
Recent research in Natural Language Processing (NLP) has led to the release of multiple large transformer-based language models (LLMs) like OpenAI’s GPT-[2,3], EleutherAI’s GPT-[Neo, J], and most recently, GPT-NeoX-20B, a 20 billion parameter language model by EleutherAI. One of the most impactful outcomes of this research has been the finding that the performance of LLMs scales predictably as a power law with the number of parameters; the downside of scaling parameters is the increased cost to fine-tune and run inference. For those not impressed by tunable parameter counts now in the tens of billions, the value appears in the performance these models can achieve on a variety of tasks after fine-tuning for just a few epochs on as few as 100 training examples.
Fine-tuning refers to the practice of further training language models on a dataset to achieve better performance on a specific task. This practice can enable a model to outperform one 10x its size on virtually any task. As such, fine-tuned models are the majority of models deployed in production on the Forefront platform and where businesses get the most value.
Until now, one had to choose between GPT-J’s 6 billion parameters and GPT-3 Davinci’s 175 billion parameters. The former is small enough to fine-tune and run inference cost-efficiently, but not big enough to perform well on complex tasks. The latter is big enough to perform well on complex tasks, but incredibly expensive to fine-tune and run. Enter GPT-NeoX-20B, and solving many more complex NLP tasks at scale starts to look doable. Let’s look at how GPT-NeoX fine-tuned on various tasks compares to vanilla GPT-NeoX and GPT-3 Davinci.
Text summarization
Summarize text into a few sentences.
Emotion classification
Classify text as an emotion.
Question answering
Answer natural language questions about provided text.
Chat summarization
Summarize dialogue and transcripts.
Content generation
Write a paragraph based on a topic and bullet point.
Question answering with context
Answer natural language questions based on the provided information and scenario.
Chatbot with personality
Imitate Elon Musk in a conversation.
Blog idea generation
Generate blog ideas based on a company name and product description.
Blog Outline
Provide a blog outline based on a topic.
How to fine-tune GPT-NeoX on Forefront
The first (and most important) step to fine-tuning a model is to prepare a dataset. A fine-tuning dataset can be in one of two formats on Forefront: JSON Lines or plain text file (UTF-8 encoding). For the purpose of this example, we’ll format our dataset as JSON Lines where each example is a prompt-completion pair. Here are some example dataset formats for the emotion classification, text summarization, question answering, and chat summarization use cases above.
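For instance, an emotion classification file in that format could look like the sketch below. The "prompt"/"completion" key names follow a common convention for prompt-completion pairs and are an assumption here; check the docs for the exact schema Forefront expects:

```python
import json

# Hypothetical emotion classification examples as JSON Lines,
# one prompt-completion pair per line.
examples = [
    {"prompt": "Classify the emotion: I can't believe they canceled my flight again!\nEmotion:",
     "completion": " anger"},
    {"prompt": "Classify the emotion: Thank you so much for the thoughtful gift.\nEmotion:",
     "completion": " gratitude"},
]

with open("emotions.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```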
After uploading your dataset, you can set the number of epochs your model will train for. Epochs refer to the number of complete passes through a training dataset, or put another way, how many times a model will “see” each training example in your dataset. A range of 2-4 epochs is typically recommended depending on the size of your dataset.
Next, you’ll set a number of checkpoints. Checkpoints refer to how many model versions will be saved throughout training. Training a model for the optimal amount of time is incredibly important, and checkpoints let you easily find the optimal time by comparing performance between models at different points during training. Performance is compared by setting test prompts.
Test prompts are a simple method to validate the performance of your model checkpoints. They work by letting you add prompts and parameters; each model checkpoint then provides completions for them. After training, you can review the completions from each checkpoint to find the best performing model.
Alternative ways to fine-tune GPT-NeoX
Alternatively, you could fine-tune GPT-NeoX on your own infrastructure. To do this, you'll need at least 8 NVIDIA A100s, A40s, or A6000s and use the GPT-NeoX GitHub repo to preprocess your dataset and run the training script. The script will need to be run with the many degrees of parallelism that EleutherAI's repo supports.
Helpful Tips
These tips are meant as loose guidelines and experimentation is encouraged.
At Forefront, we believe building a simple, free experience for fine-tuning will lower the cost of experimentation with large language models enabling businesses to solve a variety of complex NLP problems. If you have any ideas on how we can further improve the fine-tuning experience, please get in touch with our team. Don't have access to the Forefront platform? Get access
A few days ago, EleutherAI announced their latest open source language model, GPT-NeoX-20B. Today, we’re excited to announce that GPT-NeoX is live on the Forefront platform, and the model looks to outperform any previous open source language model on virtually any natural language processing or understanding task. Start using GPT-NeoX
We are bringing the same relentless focus to optimizing cost efficiency, throughput, and response speeds as we have with GPT-J. Today, you can host GPT-NeoX on our flat-rate dedicated GPUs at 2x better cost efficiency than any other platform.
The full model weights for GPT-NeoX will be downloadable for free starting February 9, under a permissive Apache 2.0 license, from The Eye. Until then, you can use the model on the Forefront platform. We look forward to seeing all the ways our customers use GPT-NeoX to build world-changing applications and solve difficult NLP problems.
Let's take a more technical look at the model.
GPT-NeoX-20B is a transformer model trained using EleutherAI’s fork of Microsoft’s DeepSpeed, which they have coined “DeeperSpeed”. "GPT" is short for generative pre-trained transformer, "NeoX" distinguishes this model from its predecessors, GPT-Neo and GPT-J, and "20B" represents the 20 billion trainable parameters. The approach to training the 20B parameter model includes data, pipeline, and model parallelism (”3D parallelism”) to maximize performance and training speed from a fixed amount of hardware. Transformers have increasingly become the model of choice for NLP problems, replacing recurrent neural network (RNN) models such as long short-term memory (LSTM), and GPT-NeoX is the newest and largest open source version of such language models.
The model consists of 44 layers with a model dimension of 6144, and a feedforward dimension of 24576. The model dimension is split into 64 heads, each with a size of 96. Rotary Position Embedding (RoPE) is applied to 24 dimensions of each head. The model is trained with a tokenization vocabulary of roughly 50,000. Unlike previous models, GPT-NeoX uses a tokenizer that was trained on the Pile and adds special tokens for repeated whitespace, which makes tokenizing code more efficient.
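As a quick sanity check, a standard back-of-the-envelope transformer formula (roughly 12 x layers x model dimension squared for the blocks, plus the embedding matrix) lands right around the advertised 20 billion parameters:

```python
# Approximate parameter count from the stated architecture.
layers, d_model, vocab = 44, 6144, 50_000  # vocab per the "roughly 50,000" above

block_params = 12 * layers * d_model**2  # attention + feedforward weights
embedding_params = vocab * d_model       # token embedding matrix

print(f"{(block_params + embedding_params) / 1e9:.1f}B parameters")  # ~20.2B
```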
GPT-NeoX was trained on the Pile, a large-scale curated dataset created by EleutherAI.
GPT-NeoX was trained as a causal, autoregressive language model for 3 months on 96 NVIDIA A100s interconnected by NVSwitch, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
GPT-NeoX learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at generating text from a prompt due to the core functionality of GPT-NeoX being to take a string of text and predict the next token. When prompting GPT-NeoX it is important to remember that the statistically most likely next token is often the one that will be provided by the model.
See how GPT-NeoX compares on task accuracy, factual accuracy, and real-world use cases.
While Davinci still outperforms due to its 10x larger parameter size, GPT-NeoX holds up well in performance and outpaces other models on most standard NLP benchmarks.
The model excels at knowledge-based, factual tasks given the Pile contains a lot of code, scientific papers, and medical papers.
The following comparisons between GPT-J and GPT-NeoX use the same prompts and parameters. Completions are provided by the general model weights for each model. Keep in mind that fine-tuning will achieve significantly better performance.
Text to command
Translate text into programmatic commands.
Product description rewriting
Generate a new product description based on a given tone.
Named Entity Recognition
Locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations.
Content generation
Generate structured HTML blog content.
Summarization
Summarize complex text into a few words.
Product review generation
Generate a product review based on a product description.
Code generation
Create code based on text instructions.
A unique aspect of GPT-NeoX is that it fills a gap between GPT-3 Curie and Davinci, pushing the edge of how large a language model can be while remaining reasonable to fine-tune without incurring significant training or hosting costs. We’ve seen a majority of our customers get the most value out of fine-tuning GPT-J, and we expect GPT-NeoX to be no different. For this reason, we’re enabling customers to fine-tune GPT-NeoX models for free. Stay tuned for a blog post comparing fine-tuned GPT-NeoX models with the standard GPT-NeoX model.
Our team is currently working to release GPT-NeoX fine-tuning within 24-48 hours. You can start using GPT-NeoX here. Please contact our team for specific questions or help related to your use case.
Forefront now offers GPT-J powered emotion classification, allowing developers and businesses to classify text across 27 different types of emotion.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Sentiment analysis has been an important task for businesses that want to automate the process of determining whether customer reviews, feedback, or messages are positive, negative, or neutral. However, with just three types of sentiment, traditional sentiment analysis falls short in many scenarios. Being able to understand the specific emotions behind text would open new doors to what is possible. Negative text could be further classified as anger or disapproval, while positive text could be distinguished between caring or admiration.
Previously, classifying the emotions behind text has required humans to read while empathizing with the writer to understand the underlying emotion. This is an inefficient way to sort through any large amount of reviews, feedback, etc. Large language models like GPT-J have opened the door to doing this task in an automated way, enabling businesses to classify emotions behind text faster and cheaper than ever before.
But don't take our word for it. The following examples are unedited and generated by just providing the text to be classified.
As you can see from those examples, our Emotion Classifier accurately analyzes text and the underlying emotions of the writer.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send any length of text to receive emotion classifications with associated probabilities. The model classifies according to 27 different types of emotion covering a wide spectrum of possible emotions.
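A call to a deployed classifier might look roughly like this sketch; the endpoint URL and response shape are assumptions, and your deployment provides the real endpoint:

```python
import requests

ENDPOINT = "https://your-deployment.example.com/emotions"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

text = "I can't thank your support team enough, they went above and beyond!"
result = requests.post(ENDPOINT, json={"text": text}, headers=HEADERS).json()

# Assumed response shape: a mapping from emotion to probability.
top3 = sorted(result.items(), key=lambda kv: -kv[1])[:3]
for emotion, prob in top3:
    print(f"{emotion}: {prob:.2f}")
```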
Emotion Classifier is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Emotion Classifier
The 27 different types of emotion that can be classified are:
Forefront now offers GPT-J powered text summarization, allowing businesses to get high quality summaries for any length of text in a single request.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Many businesses capture and use text or documents as part of their products or internal processes, and this text can be very long and difficult to navigate. Providing high quality summaries of every part of the text can open new doors to solving organizational and customer problems better than ever before.
Previously, summarizing long amounts of text like documents, articles, or books required humans to spend many hours accurately summarizing text, making it impossible to scale without spending a lot of money. Large language models like GPT-J have opened the door to doing this task in an automated way, enabling businesses to summarize any type of text 100x faster than previously possible while cutting costs by up to 90%.
But don't take our word for it. The following examples are unedited and generated by just providing the text to be summarized.
As you can see from those examples, our Text Summarizer summarizes long-form text with a high degree of accuracy.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send any length of text to receive high quality summaries back. You can even customize the degree of summarization, allowing for very compressed summaries, slightly abbreviated summaries, and everything in between.
Text Summarizer is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Text Summarizer
Forefront now offers GPT-J powered blog outlines, allowing users to generate well-structured blog outlines from just a title.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Generating blog outlines has many valuable uses in online digital marketing, SEO optimization, advertising, and journalism. The ability to automatically outline blogs from just a title opens up many opportunities to thousands of businesses.
Previously, typical machine learning techniques involving natural language processing didn't perform well on logical tasks like this. At Forefront, we have pioneered new techniques that allow well-structured blog outlines to be written from just a title.
But don't take our word for it. The following examples are unedited and generated by just providing a topic.
As you can see from those examples, our Blog Outliner creates cohesive outlines for blog posts from just a title.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send a blog title to receive a high quality outline back.
Blog Outliner is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Blog Outliner
Forefront now offers GPT-J powered paragraph generation, allowing users to generate paragraphs from just a few words.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Generating paragraphs has many valuable uses in online digital marketing, SEO optimization, advertising, and journalism. The ability to automatically create entire paragraphs from a few words opens up many opportunities to thousands of businesses.
Previously, typical machine learning techniques involving natural language processing could only generate a few sentences while degrading in quality quickly. At Forefront, we have pioneered new techniques that allow for coherent paragraphs to be written from a few words.
But don't take our word for it. The following examples are unedited and generated by just providing a topic and bullet point.
As you can see from those examples, our Paragraph Generator creates cohesive paragraphs from just a few words.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send a topic and bullet point to receive high quality paragraphs back.
Paragraph Generator is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Paragraph Generator
Forefront now offers prebuilt natural language processing (NLP) solutions to common NLP tasks including search, question answering, summarization, long-form content generation, paraphrasing, emotion classification, and more, enabling businesses to enhance their products and provide more value to customers with little to no development required.
Forefront started a few months ago by providing the best speed, cost, and fine-tuning experience for GPT-J available. Since then we have seen many businesses trying to solve similar problems.
Lots of businesses have large internal document sets that can be cumbersome to navigate. Many organizations have huge stores of video or audio transcripts that they'd like to gain insights from. Even more companies want to generate content or alter existing content without spending hours of manual human labor.
So we have decided to build solutions to these (and many other) problems, empowering businesses to easily build powerful products and tools. Today, we're excited to announce the release of the following solutions.
Search documents and text of any size with natural language questions, get relevant results, and receive natural language answers to your questions.
Provide a title to get cohesive long-form blog posts on any topic.
Paraphrase any length text with just a single HTTP request.
Get high quality summaries of any length transcript or dialogue at different levels of information compression.
Classify a message, post, review, etc. into one of 28 possible emotions.
Given a title for a blog post, receive a high quality outline for the post.
Get high quality summaries of any length of text from blogs to books at different levels of information compression.
With a topic and a bullet point, expand the bullet point into a paragraph.
We're looking forward to seeing all the things our customers do with these solutions along with the NLP solutions we will continue to build and make available on the platform. You can expect many solutions to be added every month. Start using solutions
Forefront now offers GPT-J powered natural language search & question answering, allowing users to search any amount of documents or text with questions and receive natural language answers.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Performing natural language search on large internal document sets provides value to any company with lots of internal information. Onboarding new employees, efficiently spreading knowledge to existing employees, and even enhancing customer experiences can all be improved with natural language search and question answering.
Previous techniques for searching involved complicated keyword matching and indexing. New natural language processing methods allow for indexing based on the meaning of text rather than the exact words used. This means you can now find information that's related to what you're asking, even if your query shares none of the same words as the text you're looking for.
On top of this, Forefront has innovated on the end user experience by providing an easy method to get natural language answers to the questions you may have. Now when you have a question, you can get human-like answers along with the relevant references from your documents.
Here's an example of question answering with relevant documents stored in Elasticsearch.
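At a high level, the flow looks like the sketch below: retrieve candidate passages from Elasticsearch, then ask the question-answering endpoint for a natural language answer grounded in them. The Forefront endpoint URL and field names are assumptions; the Elasticsearch query uses the standard _search REST API (a production setup would use the embeddings endpoint for semantic retrieval rather than the keyword match shown here):

```python
import requests

QA_URL = "https://your-deployment.example.com/answer"  # hypothetical endpoint
ES_URL = "http://localhost:9200/docs/_search"          # your Elasticsearch index
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

question = "What is our parental leave policy?"

# 1. Retrieve candidate passages (keyword match shown for brevity).
hits = requests.post(ES_URL, json={"query": {"match": {"body": question}}}).json()
passages = [h["_source"]["body"] for h in hits["hits"]["hits"][:3]]

# 2. Ask for a natural language answer based on the retrieved passages.
answer = requests.post(
    QA_URL,
    json={"question": question, "context": "\n".join(passages)},  # assumed parameters
    headers=HEADERS,
).json()
print(answer)
```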
Using this solution takes one click to deploy and gives you an HTTP endpoint for text embeddings and question answering.
Search Q&A is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Question Answering
Forefront now offers GPT-J powered long-form content generation, allowing users to generate full length blog posts from just a title.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Generating full length blog posts has many valuable uses in online digital marketing, SEO optimization, advertising, and journalism. The ability to automatically create entire 1000+ word blog posts in seconds opens up many opportunities to thousands of businesses.
Previously, typical machine learning techniques involving natural language processing could only generate a few sentences or paragraphs before degrading in quality. At Forefront, we have pioneered new techniques that allow for coherent and cohesive blog posts powered by several GPT-J models working in tandem.
But don't take our word for it. The following examples are unedited and generated by just providing a title for the blog post.
As you can see from those examples, Forefront's specialized GPT-J solution generates long-form content that is structured, coherent, and human-like.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send a blog title to receive a long-form blog post back.
Blog Generator is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Blog Generator
Forefront now offers long-form content paraphrasing, allowing users to rewrite any length of text in a single request.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Rewriting long-form content has many uses in marketing and sales. Paraphrasing extensive text with the same meaning opens up many opportunities for businesses in almost every field.
Previously, this was done with rigid, human-defined rules that suffered on edge cases and often produced sub-par results. Large machine learning language models like GPT-J have shown great ability to excel at these edge cases and, when trained well, deliver near human-level paraphrasing.
But don't take our word for it. The following examples are unedited and generated by just providing the text to be rewritten.
As you can see from those examples, our Rewriter paraphrases long-form content with a high degree of accuracy.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send any length of text to receive high quality paraphrased text back.
Rewriter is one of many Forefront solutions you can access on our platform. It's also deployed on a dedicated replica giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Rewriter
Forefront now offers GPT-J powered dialogue summarization, allowing businesses to get high quality summaries for any length of dialogue in a single request.
Advances in natural language processing (NLP) technology have been coming at a fast pace over the past few years, allowing for new solutions to previously difficult problems. GPT-J is a recent development by the open source community EleutherAI, and at Forefront, we have expanded its already vast capabilities by releasing specialized pre-trained models for common NLP problems.
Many businesses capture and use transcripts from video or audio as part of their products or internal processes, and this dialogue can be very long and difficult to navigate. Providing high quality summaries of every part of the dialogue can open new doors to solving organizational and customer problems better than ever before.
Previously, summarizing long transcripts required humans to spend many hours accurately summarizing dialogue, making it impossible to scale without spending a lot of money. Large language models like GPT-J have opened the door to doing this task in an automated way, enabling businesses to summarize dialogue and transcripts 100x faster than previously possible while cutting costs by up to 90%.
But don't take our word for it. The following examples are unedited and generated by just providing the dialogue to be summarized.
As you can see from those examples, our Dialogue Summarizer summarizes long-form dialogue with a high degree of accuracy.
Using this solution takes one click to deploy, and gives you an HTTP endpoint and playground to send any length of dialogue to receive high quality summaries back. You can even customize the degree of summarization, allowing for very compressed summaries, slightly abbreviated dialogues, and everything in between.
Dialogue Summarizer is one of many Forefront solutions you can access on our platform. It's deployed to dedicated replicas giving you unlimited access at a flat price, so you can scale to high usage without breaking the bank. Start using Dialogue Summarizer
The newest GPT model, GPT-J, is making its rounds in the NLP community and bringing up some questions along the way. So the purpose of this article is to answer the question: What is GPT-J?
GPT-J-6B is an open source, autoregressive language model created by a group of researchers called EleutherAI. It's one of the most advanced alternatives to OpenAI's GPT-3 and performs well on a wide array of natural language tasks such as chat, summarization, and question answering, to name a few.
For a deeper dive, GPT-J is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT" is short for generative pre-trained transformer, "J" distinguishes this model from other GPT models, and "6B" represents the 6 billion trainable parameters. Transformers are increasingly the model of choice for NLP problems, replacing recurrent neural network (RNN) models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets than was once possible.
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
GPT-J was trained on the Pile, a large-scale curated dataset created by EleutherAI.
GPT-J was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at generating text from a prompt due to the core functionality of GPT-J being to take a string of text and predict the next token. When prompting GPT-J it is important to remember that the statistically most likely next token is often the one that will be provided by the model.
GPT-J can perform various tasks in language processing without any further training, including tasks it was never trained for. It can be used for many different use cases like language translation, code completion, chat, blog post writing, and many more. Through fine-tuning (discussed later), GPT-J can be further specialized on any task to significantly increase performance.
Let's look at some example tasks:
Chat
Open ended conversations with an AI support agent.
Q&A
Create question + answer structure for answering questions based on existing knowledge.
English to French
Translate English text into French.
Parse unstructured data
Create tables from long form text by specifying a structure and supplying some examples.
Translate SQL
Translate natural language to SQL queries.
Python to natural language
Explain a piece of Python code in human understandable language.
As you can tell, the standard GPT-J model adapts and performs well on a number of different NLP tasks. However, things get more interesting when you explore fine-tuning.
While the standard GPT-J model is proficient at performing many different tasks, the model's capabilities improve significantly when fine-tuned. Fine-tuning refers to the practice of further training GPT-J on a dataset for a specific task. While scaling parameters of transformer models consistently yields performance improvements, the contribution of additional examples of a specific task can greatly improve performance beyond what additional parameters can provide. Especially for use cases like classification, extractive question answering, and multiple choice, collecting a few hundred examples is often "worth" billions of parameters.
To see what fine-tuning looks like, here's a demo (2m 33s) on how to fine-tune GPT-J on Forefront. There are two variables to fine-tuning that, when done correctly, can lead to GPT-J outperforming GPT-3 Davinci (175B parameters) on a variety of tasks. Those variables are the dataset and training duration.
For a comprehensive tutorial on preparing a dataset to fine-tune GPT-J, check out our guide.
At a high level, the following best practices should be considered regardless of your task:
Let's look at some example datasets (a sketch of one in code follows the list):
Classification
Classify customer support messages by topic.
Sentiment Analysis
Analyze sentiment for product reviews.
Idea Generation
Generate blog ideas given a company's name and product description.
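As an illustration, a sentiment analysis dataset might be written out like the sketch below. The JSON Lines layout and the "###" separator are common prompt-formatting conventions, not documented Forefront requirements:

```python
import json

# Hypothetical sentiment analysis examples, one prompt-completion pair per line.
examples = [
    {"prompt": "Review: The battery died within a week.\n###\nSentiment:",
     "completion": " negative"},
    {"prompt": "Review: Shipping was fast and the fit is perfect.\n###\nSentiment:",
     "completion": " positive"},
]

with open("sentiment.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```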
The duration you should fine-tune for largely depends on your task and number of training examples in your dataset. For smaller datasets, fine-tuning 5-10 minutes for every 100kb is a good place to start. For larger datasets, fine-tuning 45-60 minutes for every 10MB is recommended. These are rough rules of thumb and more complex tasks will require longer training durations.
GPT-J is notoriously difficult and expensive to deploy in production. When considering deployment options, there are two things to keep in mind: cost and response speeds. The most common hardware for deploying GPT-J is a T4, V100, or TPU, all of which come with less than ideal tradeoffs. At Forefront, we experienced these undesirable tradeoffs and started experimenting to see what we could do about it. Several low-level machine code optimizations later, we built a one-click GPT-J deployment offering the best cost, performance, and throughput available. Here's a quadrant to compare the different deployment methods by cost and response speeds:
Large transformer language models like GPT-J are increasingly being used for a variety of tasks, and further experimentation will inevitably lead to more use cases where these models prove effective. At Forefront, we believe providing a simple experience to fine-tune and deploy GPT-J can help companies easily enhance their products with minimal work required. Start using GPT-J today!
After EleutherAI released their new language model, GPT-J-6B, it was clear that it would fill a much-needed gap in available language models. While some discounted the model because its parameter count seems insignificant next to the 175B+ parameter models available from OpenAI and AI21, it has proven to offer advantages over its larger counterparts.
Some of these advantages are clear, like the fact that it is an open-source model, which gives you complete control over the model and the ability to deploy it to dedicated replicas. Others aren’t evident until you can fully experiment with GPT-J. While there are playgrounds available for people to get a feel for what GPT-J is about, none offer an experience on par with what you’d expect from GPT-3.
So today, we’re excited to announce the launch of our free, public GPT-J playground with all of the standard parameters you’d expect from other GPT alternatives and a list of fine-tuned models that will be available soon. With that said, this article will provide a tutorial with key concepts for our GPT-J playground.
Select model
First, select the model you’d like to use. We currently offer the standard GPT-J model, but we will be adding different fine-tuned models for people to experience the capabilities of fine-tuning.
Write your prompt
Next, write a prompt for the response you’d like to receive from the model. It’s best to tell the model what you want and to show it an example.
Adjust the parameters
Once your prompt is complete, you can customize the model parameters based on the task you’re giving the model. More information on parameters is provided later in this guide.
Generate response
Finally, press ‘Submit’ to generate a response from the model.
The prompt is how you “program” the model to achieve the response you’d like. GPT-J can do everything from writing original stories to generating code. Because of its wide array of capabilities, you have to be explicit in showing it what you want. Telling and showing is the secret to a good prompt.
GPT-J tries to guess what you want from the prompt. If you write the prompt “Give me a list of fiction books” the model may not automatically assume you’re asking for a list of books. Instead, you could be asking the model to continue a conversation that starts with “Give me a list of fiction books” and continue to say “and I’ll tell you my favorite.”
There are three basic tips to creating prompts:
1. Check your settings
The temperature and top_p parameters are what you will typically be configuring based on the task. These parameters control how deterministic the model is in generating a response. A common mistake is assuming these parameters control “creativity”. For instance, if you're looking for a response that's not obvious, then you might want to set them higher. If you're asking it for a response where there's only one right answer, then you'd want to set them lower. More on GPT-J parameters later.
2. Show and tell
Make it clear what you want through a combination of instructions and examples. Back to our previous example, instead of:
“Give me a list of fiction books”
Do:
“Give me a list of fiction books. Here’s an example list: Harry Potter, Game of Thrones, Lord of the Rings.”
3. Provide quality data
If you’re trying to classify text or get the model to follow a pattern, make sure that there are enough examples. Not only is providing sufficient examples important, but the examples should be proofread for spelling or grammatical errors. While the model is usually capable of seeing through simple errors, it may believe they are intentional.
Whitespace, or what happens when you press the Spacebar, can be a token or tokens depending on context. Make sure to never have trailing whitespace at the end of your prompt or else it can have unintended effects on the model’s response.
GPT-J understands and processes text by breaking it down into tokens. As a rough rule of thumb, 1 token is approximately 4 characters. For example, the word “television” gets broken up into the tokens “tele”, “vis”, and “ion”, while a short and common word like “dog” is a single token. Tokens are important to understand because GPT-J, like other language models, has a maximum context length of 2048 tokens, or roughly 1500 words. The context length includes both the text prompt and the generated response.
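As a quick sanity check against that limit, you can approximate a token budget from the 4-characters-per-token rule of thumb. A real tokenizer will give different counts, so treat this sketch as an approximation only:

```python
MAX_CONTEXT = 2048  # GPT-J's context limit: prompt + response combined

def approx_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token heuristic."""
    return max(1, round(len(text) / 4))

prompt = "Give me a list of fiction books."
used = approx_tokens(prompt)
print(f"~{used} prompt tokens, ~{MAX_CONTEXT - used} tokens left for the response")
```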
Parameters are different settings that control the way in which GPT-J responds. Becoming familiar with the following parameters will allow you to apply GPT-J to a number of different tasks.
Response length
Response length is the length, in tokens, of the text generated from your prompt. A token is roughly 4 characters, including alphanumerics and special characters.
Note that the prompt and response combined cannot exceed GPT-J’s 2048-token context length, so a longer prompt leaves fewer tokens for the response.
Temperature
Temperature controls the randomness of the generated text. A value of 0 makes the engine deterministic, which means that it will always generate the same output for a given input text. A value of 1 makes the engine take the most risks.
As a frame of reference, temperature values between 0.7 and 0.9 are common for story completion or idea generation.
Top-P
Top-P is an alternative way of controlling the randomness of the generated text. We recommend using only one of Temperature and Top-P, so when adjusting one, make sure the other is set to 1.
A rough rule of thumb is that Top-P provides better control for applications in which GPT-J is expected to generate text with accuracy and correctness, while Temperature works best for those applications in which original, creative or even amusing responses are sought.
Top-K
Top-K sampling means sorting tokens by probability and zeroing out the probabilities of everything below the k-th token. A lower value improves quality by removing the low-probability tail, making it less likely for the model to go off topic.
Repetition penalty
Repetition penalty works by lowering the chances of a word being selected again the more times that word has already been used. In other words, it works to prevent repetitive word usage.
Stop sequences
Stop sequences allow you to define one or more sequences that, when generated, force GPT-J to stop.
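To see how these parameters fit together, here's an illustrative set of values for an open-ended writing task. The field names are assumptions for the sake of the sketch; refer to the docs for the exact request schema:

```python
# Illustrative parameter settings for a creative prompt.
# Field names are assumptions, not the documented schema.
params = {
    "prompt": "Write the opening line of a mystery novel.",
    "length": 64,               # response length in tokens
    "temperature": 0.8,         # higher = more varied, creative output
    "top_p": 1,                 # leave at 1 while steering with temperature
    "top_k": 40,                # zero out the low-probability tail
    "repetition_penalty": 1.2,  # discourage repeated words
    "stop": ["\n\n"],           # halt generation at a blank line
}
```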
This provides a basic understanding of the key concepts to begin using our free GPT-J playground. As you begin to experiment and generate interesting or funny responses worth sharing, feel free to tweet them to us!
Fine-tuning is a powerful technique to create a new GPT-J model that is specific to your use case. When done correctly, fine-tuning GPT-J can achieve performance that exceeds significantly larger, general models like OpenAI's GPT-3 Davinci.
To fine-tune GPT-J on Forefront, all you need is a set of training examples formatted in a single text file with each example generally consisting of a single input example and its associated output. Fine-tuning can solve a variety of problems, and the optimal way to format your dataset will depend on your specific use case. Below, we'll list the most common use cases for fine-tuning GPT-J, corresponding guidelines, and example text files.
Before diving into the most common use cases, there are a few best practices that should be followed regardless of the specific use case:
Classification is the process of categorizing text into predefined groups. In classification problems, each input in the prompt should be classified into one of your predefined classes.
Choose classes that map to a single token. At inference time, set the response length parameter to 1, since you only need the first token for classification, as shown in the example request below.
Let's say you'd like to organize your customer support messages by topic. You may want to fine-tune GPT-J to classify incoming support messages so they can be routed appropriately.
The dataset might look like the following:
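For illustration, here's an invented snippet in that format; the messages are made up, and the category labels are kept short so each maps to a single token:

```
Classify the support message into one of the following categories: Billing, Bug, Account
Message: I was charged twice for my subscription this month.
Category: Billing
<|endoftext|>
Classify the support message into one of the following categories: Billing, Bug, Account
Message: The dashboard crashes every time I open the reports tab.
Category: Bug
<|endoftext|>
Classify the support message into one of the following categories: Billing, Bug, Account
Message: How do I change the email address on my profile?
Category: Account
<|endoftext|>
```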
In the example above, we provide instructions for the model, followed by an input containing the customer support message and an output classifying the message into the corresponding category. As a separator we used "<|endoftext|>", which clearly separates the different examples. The advantage of using "<|endoftext|>" as the separator is that the model natively uses it to indicate the end of a completion. It doesn't need to be set as a stop sequence either, because the model automatically stops a completion before outputting "<|endoftext|>".
Now we can query our model by making a Completion request.
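A minimal sketch of such a request is shown below. The endpoint URL, header, and field names are placeholders for illustration; take the real ones from your deployment page and the API docs:

```python
import requests

response = requests.post(
    "https://<your-deployment-url>/completions",    # placeholder URL
    headers={"Authorization": "Bearer <api-key>"},  # placeholder key
    json={
        "prompt": (
            "Classify the support message into one of the following "
            "categories: Billing, Bug, Account\n"
            "Message: How do I update my credit card on file?\n"
            "Category:"
        ),
        "length": 1,       # one token suffices for a single-token class label
        "temperature": 0,  # deterministic: always pick the most likely class
    },
)
print(response.json())
```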
Sentiment Analysis is the act of identifying and extracting opinions within a given text across blogs, reviews, social media, forums, news, etc. Let's say you'd like to measure the degree to which a particular product review is positive or negative.
The dataset might look like the following:
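Below is an invented snippet following the same pattern, using the digits 1-5 as single-token sentiment labels:

```
Rate the sentiment of the product review from 1 (very negative) to 5 (very positive).
Review: The battery died within a week and support never replied.
Sentiment: 1
<|endoftext|>
Rate the sentiment of the product review from 1 (very negative) to 5 (very positive).
Review: Exactly what I needed, and it shipped a day early.
Sentiment: 5
<|endoftext|>
```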
Now we can query our model by making a Completion request.
The purpose of a chatbot is to simulate human-like conversations with users via text message or chat. You could fine-tune GPT-J to imitate a specific person or respond in certain ways provided the context of a given conversation to use in a customer support situation. First, let's look at getting GPT-J to imitate Elon Musk.
The dataset might look like the following:
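As an invented illustration (these lines are written for the example and are not actual quotes), a long-form transcript-style dataset might read:

```
User: What's the hardest part about building rockets?
Elon Musk: Making them reusable. Getting to orbit is hard, but landing the booster so you can fly it again is what actually changes the economics of spaceflight.
User: Do you think we'll see people on Mars in our lifetime?
Elon Musk: I think so. If launch costs keep coming down, a crewed Mars landing in the next decade or two is achievable.
User: What keeps you motivated?
Elon Musk: The future being better than the past. Working on things that expand what's possible for humanity.
```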
Here we purposefully left out separators between specific examples. Instead, you can opt to compile long-form conversations when imitating a specific person, since the goal is to capture a wide variety of responses in an open-ended format.
You could query the model by making a Completion request.
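Here's a sketch of such a request; as before, the endpoint and field names are placeholders:

```python
import requests

response = requests.post(
    "https://<your-deployment-url>/completions",
    headers={"Authorization": "Bearer <api-key>"},
    json={
        "prompt": "User: What do you think about AI?\nElon Musk:",
        "length": 100,
        "temperature": 0.8,               # allow some variety in replies
        "stop": ["User:", "Elon Musk:"],  # halt before the next speaker turn
    },
)
print(response.json())
```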
Notice that we provide "User:" and "Elon Musk:" as stop sequences. It's important to anticipate how the model may continue generating beyond the desired output and use stop sequences to stop it. Given the pattern of the dataset, where the User says something followed by Elon Musk, it makes sense to use "User:" and "Elon Musk:" as stop sequences.
A similar but different chatbot use case would be that of a customer support bot. Here, we'll go back to providing specific examples with separators so the model can identify how to respond in different situations. Depending on your customer support needs, this use case could require a few thousand examples, as it will likely deal with different types of requests and customer issues.
The dataset might look like the following:
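An invented snippet in this format, with "#####" separating examples, might look like:

```
Customer: I can't log in to my account. It says my password is wrong.
Agent: Sorry about that! You can reset your password from the login page by clicking "Forgot password". Is there anything else I can help with?
#####
Customer: Where can I find my past invoices?
Agent: Your invoices are available under Settings > Billing. Let me know if you have trouble finding them!
#####
```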
An optional improvement that could be made to the above dataset would be to provide more context and exchanges leading up to the resolution for each example. However, this depends on the role you're hoping to fill with your customer support chatbot.
Now we can query our model by making a Completion request.
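The request follows the same pattern as the chat example, swapping in the stop sequences that match this dataset (the endpoint and schema remain placeholders):

```python
import requests

response = requests.post(
    "https://<your-deployment-url>/completions",
    headers={"Authorization": "Bearer <api-key>"},
    json={
        "prompt": "Customer: I can't log in to my account.\nAgent:",
        "length": 80,
        "temperature": 0.3,              # keep support answers focused
        "stop": ["Customer:", "#####"],  # halt after the relevant completion
    },
)
print(response.json())
```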
As with the previous example, we're using "Customer:" and "#####" as stop sequences so the model stops after providing the relevant completion.
The main purpose of entity extraction is to extract information from given text to understand the subject, theme, or other pieces of information like names, places, etc. Let's say, you'd like to extract names from provided text.
The dataset might look like the following:
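An invented snippet for name extraction might look like:

```
Extract the person names from the text.
Text: Matt met Sarah and Dr. Chen at the conference in Austin.
Names: Matt, Sarah, Chen
<|endoftext|>
Extract the person names from the text.
Text: The quarterly report was co-authored by Priya Patel and James O'Brien.
Names: Priya Patel, James O'Brien
<|endoftext|>
```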
Now we can query our model by making a Completion request.
A common use case is to use GPT-J to generate ideas from specific information. Whether it's copy for ads or websites, blog ideas, or product ideas, idea generation is a useful task for GPT-J. Let's look at the aforementioned use case of generating blog ideas.
The dataset might look like the following:
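An invented snippet (the company and product are made up) might look like:

```
Company: Acme Analytics
Product: A dashboard that turns raw sales data into weekly reports.
Blog idea: 5 sales metrics every founder should review each Monday
<|endoftext|>
Company: Acme Analytics
Product: A dashboard that turns raw sales data into weekly reports.
Blog idea: How to spot a slow quarter three weeks before it happens
<|endoftext|>
```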
Now we can query our model by making a Completion request.
Following the above examples on preparing a dataset for each use case should lead to well-performing fine-tuned GPT-J models when you have a sufficient amount of data (a dataset larger than 100MB). For datasets smaller than 100MB, it is recommended to also provide explicit instructions with each example, as in the following dataset for blog idea generation.
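For example, the same invented blog-idea data with an explicit instruction prepended to each example:

```
Generate a blog post idea based on the company name and product description.
Company: Acme Analytics
Product: A dashboard that turns raw sales data into weekly reports.
Blog idea: 5 sales metrics every founder should review each Monday
<|endoftext|>
```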
Generative Pre-trained Transformer (GPT) models, the family to which GPT-J and GPT-3 belong, have taken the NLP community by storm. These powerful language models excel at various NLP tasks like question answering, entity extraction, categorization, and summarization without any supervised training. They require few to no examples to understand a given task and can outperform state-of-the-art models trained in a supervised fashion.
GPT-J is a 6-billion parameter transformer-based language model released by a group of AI researchers called EleutherAI in June 2021. The group's goal since forming in July 2020 has been to open-source a family of models designed to replicate those developed by OpenAI. Their current focus is on replicating the 175-billion parameter language model GPT-3. But don’t let the difference in parameter size fool you. GPT-J outperforms GPT-3 in code generation tasks and, through fine-tuning, can outperform GPT-3 on a number of common natural language processing (NLP) tasks. The purpose of this article is to outline an array of use cases that GPT-J can be applied to and excel at. For information on how to fine-tune GPT-J for any of the use cases below, check out our fine-tuning tutorial.
The most natural use case for GPT-J is generating code. GPT-J was trained on a dataset called the Pile, an 825GB collection of 22 smaller datasets, including academic sources (e.g., Arxiv, PubMed), communities (StackExchange, Wikipedia), code repositories (GitHub), and more. The inclusion of GitHub in the training data has led to GPT-J outperforming GPT-3 on a variety of code generation tasks. While “vanilla” GPT-J is proficient at this task, it becomes even more capable when fine-tuned on a given programming language.
To get started fine-tuning GPT-J for code generation, check out Hugging Face’s CodeSearchNet containing 2 million comment/code pairs from open-source libraries hosted on GitHub for Go, Java, Javascript, PHP, Python, and Ruby.
Input:
Output:
An increasingly common NLP use case is to build a chatbot. A chatbot is software that simulates human-like conversations with users via text message or chat. With their main commercial use being to answer users’ questions, chatbots are commonly deployed in a variety of customer support scenarios. However, chatbots can also be used to imitate specific people like Kanye West.
Regardless of your reason for using a chatbot, it is recommended to fine-tune GPT-J on transcripts of the specific task. For instance, let’s say you want a custom chatbot to assist with customer support requests. A simple method to curate a fine-tuning dataset would be to record transcripts of typical customer support exchanges between your team and customers. On the order of one hundred examples would be enough for GPT-J to become proficient at your company’s specific customer support tasks.
Story writing is the craft of producing fiction in an easily understandable grammatical structure with a natural flow of speech.
Story writing with GPT-J becomes interesting as one could fine-tune to a particular author’s writing style or book series. Imagine having a Stephen King writing bot or a bot that could help generate books 6 and 7 to Game of Thrones because, let’s be honest, George R.R. Martin is dragging his feet at this point.
Here’s an example of the beginning of a fictitious piece written by GPT-J-6B:
The main purpose of entity extraction is to extract information from given text to understand the subject, theme, or other pieces of information like names, places, etc. Some interesting use cases for entity extraction include:
Financial market analysis: Extract key figures from financial news articles or documents to use as signals for trading algorithms or market intelligence
Email inbox optimization: Notify users of flight times, meeting locations, and credit card charges without having to open emails
Content recommendation: Extract information from articles and media to recommend content based on entity similarity and user preferences
GPT-J shines new light on entity extraction, providing a model that is adaptive to both general text and specialized documents through few-shot learning.
Summarization is the process of summarizing information in given text for quicker consumption without losing its original meaning. GPT-J is quite proficient out-of-the-box at summarization. What follows is an example of taking a snippet of the Wikipedia article for Earth and tasking GPT-J to provide a short summary.
Input:
Output:
Not to be confused with summarization, paraphrasing is the process of rewriting a passage without changing the meaning of the original text. Where summarization attempts to condense information, paraphrasing rewords the given information. While GPT-J is capable of summarization out-of-the-box, paraphrasing with GPT-J is best achieved through fine-tuning. Here is an example of paraphrasing a shorter snippet from the same Earth Wikipedia article in the previous summarization example after training on hand-written paraphrasing examples.
Input:
Output:
A widely used commercial use case for GPT-J and other transformer-based language models is copywriting for websites, ads, and general marketing. Copywriting is a crucial marketing process to increase website, ad, and other conversion rates. Through fine-tuning GPT-J on a given company’s voice or previously successful ad campaigns, GPT-J can automatically provide effective copy at a fraction of the cost of hiring a copywriter.
Input:
Output:
Text classification is the process of categorizing text into organized groups. Unstructured text is everywhere, such as emails, text conversations, websites, and social media, and the first step in extracting value from this data is to categorize it into organized groups. This is another use case where fine-tuning GPT-J will lead to the best performance. By providing one hundred examples or more of your given classification task, GPT-J can perform as well as or better than the largest language models available, like OpenAI’s GPT-3 Davinci.
Sentiment analysis is the act of identifying and extracting opinions within given text like blogs, reviews, social media, forums, news, etc. Perhaps you’d like to automatically analyze thousands of reviews about your products to discover whether customers are happy with your pricing plans, or gauge brand sentiment on social media in real time so you can detect disgruntled customers and respond immediately. The applications of sentiment analysis are endless and applicable to any type of business.
Given the infancy of large transformer-based language models, further experimentation will inevitably lead to more use cases that these models prove to be effective at. As you may have noticed, a number of the use cases are the result of fine-tuning GPT-J. At Forefront, we believe the discovery of more use cases will not only come from increased usage of these models, but by providing a simple experience to fine-tune that allows for easy experimentation and quick feedback loops. For a tutorial on easily fine-tuning GPT-J on Forefront, check out our recent tutorial.
Recent research in Natural Language Processing (NLP) has led to the release of multiple large transformer-based language models like OpenAI’s GPT-[2,3], EleutherAI’s GPT-[Neo, J], and Google’s T5. For those not impressed by the leap of tunable parameters into the billions, the ease with which these models can perform a never-before-seen task without training a single epoch is something to behold. While it has become evident that the more parameters a model has, the better it will generally perform, an exception to this rule emerges when one explores fine-tuning. Fine-tuning refers to the practice of further training transformer-based language models on a dataset for a specific task. This practice has led to the 6 billion parameter GPT-J outperforming the 175 billion parameter GPT-3 Davinci on a number of specific tasks. As such, fine-tuning will continue to be the modus operandi when using language models in practice, and, consequently, fine-tuning is the main focus of this post. Specifically, how to fine-tune the open-source GPT-J-6B.
The first step in fine-tuning GPT-J is to curate a dataset for your specific task. The specific task for this tutorial will be to imitate Elon Musk. To accomplish this, we compiled podcast transcripts of Elon’s appearances on the Joe Rogan Experience and Lex Fridman Podcast into a single text file. Here’s the text file for reference. Note that the size of the file is only 150kb. When curating a dataset for fine-tuning, the main focus should be to encapsulate an evenly-distributed sample of the given task instead of prioritizing raw size of the data. In our case, these podcast appearances of Elon were great as they encompass multiple hours of him speaking on a variety of different topics.
If you plan on fine-tuning on a dataset of 100MB or greater, get in touch with our team before beginning. For more information on preparing your dataset, check out our guide.
Believe it or not, once you have your dataset, the hard part is done since Forefront abstracts all of the actual fine-tuning complexity away. Let’s go over the remaining steps to train your fine-tuned model.
Create deployment
Once logged in, click “New deployment”.
Select Fine-tuned GPT-J
From here, we’ll add a name and optional description for the deployment then select "Fine-tuned GPT-J".
Upload dataset
Then, we’ll upload our dataset in the form of a single text file. Again, if the dataset is 100MB or greater, get in touch with our team.
Set training duration
A good rule of thumb for smaller datasets is to train 5-10 minutes every 100kb. For text files in the order of megabytes, you’ll want to train 45-60 minutes for every 10MB.
Set number of checkpoints
A checkpoint is a saved model version that you can deploy. You’ll want to set a number of checkpoints that evenly divides the training duration.
Add test prompts
Test prompts are prompts that every checkpoint will automatically provide completions for so you can compare the performance of the different models. Test prompts should be pieces of text that are not found in your training text file. This allows you to see how good the model is at understanding your topic and prevents the model from regurgitating information it has seen in your training set.
You can also customize model parameters for your specific task.
Once your test prompts are set, you can press 'Fine-tune' and your fine-tuned model will begin training. You may notice the estimated completion time is longer than your specified training time. This is because it takes time to load the base weights prior to training.
View test prompts
As checkpoints begin to appear, you can press 'View test prompts' to start comparing performance between your different checkpoints.
Deploy to Playground and integrate in application
Now for the fun part: deploying your best-performing checkpoint(s) for further testing in the Playground or integration into your app.
To see how simple it is to use the Playground and integrate your GPT-J deployment into your app, check out our tutorial on deploying standard GPT-J.
Using Forefront isn’t the only way to fine-tune GPT-J. For a tutorial on fine-tuning GPT-J by yourself, check out Eleuther’s guide. However, it’s important to note that not only do you save time by fine-tuning on Forefront, but it’s absolutely free—saving you $8 per hour of training. Also, when you go to deploy your fine-tuned model you save up to 33% on inference costs with increased throughput by deploying on Forefront.
These tips are meant as loose guidelines and experimentation is encouraged.
At Forefront, we believe building a simple experience for fine-tuning can increase experimentation with quicker feedback loops so companies and individuals can apply language models to a myriad of problems. If you have any ideas on how we can further improve the fine-tuning experience, please get in touch with our team.
More than one year has passed since the public release of OpenAI's API for GPT-3. Since then, thousands of developers and hundreds of companies have started building on the platform to apply the transformer-based language model to a variety of NLP problems.
In its wake, EleutherAI, a team of AI researchers open-sourcing their work, released their first implementation of a GPT-like system, the 2.7B parameter GPT-Neo, and most recently, the 6B parameter GPT-J. Before getting into GPT-J deployments, let's understand why a company or developer would use GPT-J in the first place.
So why would one prefer to use the open-source 6B parameter GPT-J over the 175B parameter GPT-3 Davinci? The answer comes down to cost and performance.
First, let's talk about cost. With GPT-3, you pay per 1000 tokens. For the unacquainted, you can think of tokens as pieces of words, where 1000 tokens are about 750 words. So with GPT-3, your costs scale directly with usage. On the other hand, the open-source GPT-J can be deployed to cloud infrastructure, enabling effectively unlimited usage while only incurring the cost of the cloud hardware hosting the model.
Now let's talk about performance. "Bigger is better" has become an adage for a reason, and transformer-based language models are no exception. While a 100B parameter transformer model will generally outperform a 10B parameter one, the keyword is generally. Unless you're trying to solve artificial general intelligence, you probably have a specific use case in mind. This is where fine-tuning GPT-J, or specializing the model on a dataset for a specific task, can lead to better performance than GPT-3 Davinci.
Now that we've covered why one would use GPT-J over GPT-3 (lower costs at scale and better performance on specific tasks), let's discuss how to deploy it.
For this tutorial, we'll be deploying the standard GPT-J-6B.
Create deployment
Once logged in, you can click "New deployment".
Select Vanilla GPT-J
From here, add a name and optional description for your deployment then select "Vanilla GPT-J".
Press "Deploy"
Navigate to your newly created deployment, and press "Deploy" to deploy your Vanilla GPT-J model.
Replica count
From your deployment, you can control the replica count for your deployments as usage increases to maintain fast response speeds at scale.
Inferencing
To begin inferencing, copy the URL under the deployment name and refer to our docs for a full set of instructions on passing requests and receiving responses.
You can expect all the parameters you'd typically use with GPT-3 like response length, temperature, top P, top K, repetition penalty, and stop sequences.
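For a concrete picture, here's a minimal sketch of an inference request. The URL is whatever you copied from your deployment, and the parameter names below are assumptions for illustration; the docs have the canonical schema:

```python
import requests

response = requests.post(
    "https://<your-deployment-url>",                # URL copied from your deployment
    headers={"Authorization": "Bearer <api-key>"},  # placeholder key
    json={
        "prompt": "Write a tagline for a coffee subscription service.",
        "length": 32,
        "temperature": 0.8,
        "top_p": 1,
        "top_k": 40,
        "repetition_penalty": 1.2,
        "stop": ["\n"],
    },
)
print(response.json())
```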
Playground
You can also navigate to Playground to experiment with your new GPT-J deployment without needing to use Postman or implement any code.
Deploying GPT-J on Forefront takes only a few minutes. On top of the simplicity we bring to the deployment process, we've made several low-level machine-code optimizations that enable your models to run at a fraction of the cost compared to deploying on Google's TPU v2, with no loss in throughput. Start using GPT-J today!