Chapter 3: GPT-3 and Programming
Almost all of GPT-3’s NLP capabilities are built in the Python programming language. But to enable wider accessibility, the API comes with pre-built support for all the major programming languages, so users can build GPT-3-powered applications in the programming language of their choice.
In this section, we will illustrate how this works by replicating an example with different programming languages.
Just a heads-up: In each language-specific chapter, we assume you have a basic understanding of the programming language being discussed. If you don’t, you can safely skip the section.
How to use the OpenAI API with Python?
Python is the most popular language for data science and machine learning tasks. Compared to conventional data-science programming languages like R and Stata, Python shines because it’s scalable and integrates well with databases. It is widely used and has a flourishing community of developers keeping its ecosystem up to date. Python is easy to learn and comes with useful data science libraries like NumPy and pandas.
You can pair GPT-3 with Python using a library called Chronology that provides a simple, intuitive interface. Chronology can mitigate the monotonous work of writing all of your code from scratch every time. Its features include:
        It calls the OpenAI API asynchronously, allowing you to generate multiple prompt completions at the same time.
        You can create and modify training prompts easily; for example, modifying a training prompt used by a different example is fairly straightforward.
       It allows you to chain prompts together by plugging the output of one prompt into another.
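The chaining feature can be sketched in a few lines. In this sketch, `complete` is a stand-in coroutine for Chronology's `cleaned_completion` call (so the example runs without an API key); a real workflow would await the API there instead:

```python
import asyncio

# Stand-in for Chronology's cleaned_completion call; a real workflow
# would await the OpenAI API here instead of returning canned text.
async def complete(prompt: str) -> str:
    return "A short summary."

# Chains two prompts: the completion of the first prompt is plugged
# into the second prompt as its input.
async def chained_workflow(passage: str) -> str:
    summary = await complete(f"Summarize this passage:\n{passage}")
    return await complete(f"Rephrase this summary for a second grader:\n{summary}")

result = asyncio.run(chained_workflow("Olive oil is a liquid fat obtained from olives."))
print(result)
```

Because the functions are asynchronous, several such chains can also be run in parallel with `asyncio.gather`.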
Chronology is hosted on PyPI and supports Python 3.6 and above. To install the library, you can run the following command:
(base) PS D:\GPT-3 Python> pip install chronological
After installing the Python library via PyPI, let’s look at an example of how to prime GPT-3 to summarize a given text document at a second-grade reading level. We’ll show you how to call the API, send the training prompt as a request, and get the summarized completion as an output. We’ve posted the code for you in a GitHub repository.
In this example, we will use the following training prompt:
My second-grader asked me what this passage means:
"""
Olive oil is a liquid fat obtained from olives (the fruit of Olea europaea; family Oleaceae)...
"""
I rephrased it for him, in plain language a second grader can understand:
"""
First, import the following dependencies:
# Importing Dependencies
from chronological import read_prompt, cleaned_completion, main
Now we can create a function that reads the training prompt and provides the completion output. We have made this function asynchronous, which allows us to carry out parallel function calls. We will use the following configuration for the API parameters:
       Maximum tokens=100
       Execution Engine="davinci"
       Temperature=0.5
       Top-p=1
       Frequency Penalty = 0.2
       Stop Sequence = [" "]
# Takes in the training prompt and returns the completed response
async def summarization_example():
    # Takes in a text file (summarize_for_a_2nd_grader) as the input prompt
    prompt_summarize = read_prompt('summarize_for_a_2nd_grader')
    # Calling the completion method along with the specific GPT-3 parameters
    completion_summarize = await cleaned_completion(prompt_summarize, max_tokens=100, engine="davinci", temperature=0.5, top_p=1, frequency_penalty=0.2, stop=[" "])
    # Return the completion response
    return completion_summarize
Now we can create an asynchronous workflow, invoke that workflow using the ‘main’ function provided by the library, and print the output in the console:
# Designing the end-to-end async workflow, capable of running multiple prompts in parallel
async def workflow():
    # Making an async call to the summarization function
    text_summ_example = await summarization_example()
    # Printing the result in the console
    print('-------------------------')
    print('Basic Example Response: {0}'.format(text_summ_example))
    print('-------------------------')

# Invoke Chronology by using the main function to run the async workflow
main(workflow)
Save it as a Python script named text_summarization.py and run it from the terminal to generate the output. You can run the following command from your root folder:
(base) PS D:\GPT-3 Python> python text_summarization.py
Once you execute the script, your console should print the following summary of the prompt:
-------------------------
Basic Example Response: Olive oil is a liquid fat that comes from olives. Olives grow on a tree called an olive tree. The olive tree is the most common tree in the Mediterranean. People use the oil to cook with, to put on their salads, and as a fuel for lamps.
-------------------------
If you are not well versed in Python and want to chain different prompts without writing code, you can use the no-code interface built on top of the Chronology library to create the prompt workflow using drag-and-drop. See our GitHub repository for more examples of how you can use Python programming to interact with GPT-3.
How to use the OpenAI API with Go?
Go is an open-source programming language that incorporates elements from other languages to create a powerful, efficient, and user-friendly tool. Many developers refer to it as a modern version of C.
Go is the language of preference for building projects that require high security, high speed, and high modularity. This makes it an attractive option for many projects in the fintech industry. Key features of Go are as follows:
       Ease of use
       State-of-the-art productivity
       High-level efficiency
       Static typing
       Advanced performance for networking
       Full use of multi-core power
If you are completely new to Go and want to give it a try, you can follow the documentation to get started.
Once you are done with the installation and understand the basics of Go programming, you can follow these steps to use the Go API wrapper for GPT-3. To learn more about creating Go modules, see this tutorial.
First, you’ll create a module to track and import code dependencies. Create and initialize the “gogpt” module using the following command:
D:\GPT-3 Go> go mod init gogpt
After creating the “gogpt” module, let’s point it to this GitHub repository to download the necessary dependencies and packages for working with the API. Use the following command:
D:\GPT-3 Go> go get github.com/sashabaranov/go-gpt3
go get: added github.com/sashabaranov/go-gpt3 v0.0.0-20210606183212-2be4a268a894
We’ll use the same text summarization example as in the previous section. (You can find all the code at the following repository.)
Let’s import the necessary dependencies and packages for starters:
// Calling the package main
package main
// Importing Dependencies
import (
   "fmt"
   "io/ioutil"
   "context"
   gogpt "github.com/sashabaranov/go-gpt3"
)
Go organizes source files into directories called packages, which makes it easier to reuse code across Go applications. In the first line of the code we declare the package "main", telling the Go compiler that the package should compile as an executable program instead of a shared library.
NOTE: In Go, you create a package as a shared library for reusable code, and the "main" package for executable programs. The "main" function within the package serves as the entry point for the program.
Now you’ll create a main function that will host the entire logic of reading the training prompt and providing the completion output. Use the following configuration for the API parameters:
       Maximum tokens=100
       Execution Engine="davinci"
       Temperature=0.5
       Top-p=1
       Frequency Penalty = 0.2
       Stop Sequence = [" "]
func main() {
    c := gogpt.NewClient("OPENAI-API-KEY")
    ctx := context.Background()

    prompt, err := ioutil.ReadFile("prompts/summarize_for_a_2nd_grader.txt")
    if err != nil {
        fmt.Println(err)
        return
    }

    req := gogpt.CompletionRequest{
        MaxTokens:        100,
        Temperature:      0.5,
        TopP:             1.0,
        Stop:             []string{" "},
        FrequencyPenalty: 0.2,
        Prompt:           string(prompt),
    }

    resp, err := c.CreateCompletion(ctx, "davinci", req)
    if err != nil {
        fmt.Println(err)
        return
    }

    fmt.Println("-------------------------")
    fmt.Println(resp.Choices[0].Text)
    fmt.Println("-------------------------")
}
This code performs the following tasks:
  1. Sets up a new API client with the API token and creates a background context for the request.
  2. Reads the prompt summarize_for_a_2nd_grader in the form of a text file from the prompts folder.
  3. Creates a completion request by providing the training prompt and specifying the values of the API parameters (like temperature, top-p, stop sequence, and so forth).
  4. Calls the create completion function and provides it with the API client, completion request, and execution engine.
  5. Generates a response in the form of a completion, which is printed to the console at the end.
You can then save the code file as ‘text_summarization.go’ and run it from the terminal to generate the output. Use the following command to run the file from your root folder:
(base) PS D:\GPT-3 Go> go run text_summarization.go
Once you execute the file, your console will print the following output:
-------------------------
Olive oil is a liquid fat that comes from olives. Olives grow on a tree called an olive tree. The olive tree is the most common tree in the Mediterranean. People use the oil to cook with, to put on their salads, and as a fuel for lamps.
-------------------------
For more examples of how you can use Go programming to interact with GPT-3, please visit our GitHub repository.
How to use the OpenAI API with Java?
Java is one of the oldest and most popular programming languages for developing conventional software systems; it is also a platform that comes with a runtime environment. It was developed by Sun Microsystems (now a subsidiary of Oracle) in 1995, and as of today, more than 3 billion devices run on it. It is a general-purpose, class-based, object-oriented programming language designed to have fewer implementation dependencies. Its syntax is similar to that of C and C++. Two-thirds of the software industry still uses Java as its core programming language.
Let’s use our olive-oil text summarization example once more. As we did with Python and Go, we’ll show you how to call the API, send the training prompt as a request, and get the summarized completion as an output using Java.
For a step-by-step code walkthrough on your local machine, clone our GitHub repository. In the cloned repository, go to the Programming_with_GPT-3 folder and open the GPT-3_Java folder.
First, import all the relevant dependencies:
package example;
// Importing Dependencies
import java.util.*;  
import java.io.*;
import com.theokanning.openai.OpenAiService;
import com.theokanning.openai.completion.CompletionRequest;
import com.theokanning.openai.engine.Engine;
Now you’ll create a class named OpenAiApiExample. All of your code will be a part of it. Under that class, first create an OpenAiService object using the API token:
class OpenAiApiExample {
    public static void main(String... args) throws FileNotFoundException {
        String token = "OPENAI-API-KEY";
       OpenAiService service = new OpenAiService(token);
The connection to OpenAI API is now established in the form of a service object. Read the training prompt from the prompts folder:
// Reading the training prompt from the prompts folder
File file = new File("D:\\GPT-3 Book\\Programming with GPT-3\\GPT-3 Java\\example\\src\\main\\java\\example\\prompts\\summarize_for_a_2nd_grader.txt");
Scanner sc = new Scanner(file);
// We use \Z as the delimiter to read the entire file in one token
sc.useDelimiter("\\Z");
// pp is the string consisting of the training prompt
String pp = sc.next();
Then you can create a completion request with the following configuration for the API parameters:
       Maximum tokens=100
       Execution Engine="davinci"
       Temperature=0.5
       Top-p=1
       Frequency Penalty = 0.2
       Stop Sequence = [" "]
// Creating a list of strings to be used as the stop sequence
List<String> li = new ArrayList<String>();    
li.add(" '''");
// Creating a completion request with the API parameters
CompletionRequest completionRequest = CompletionRequest.builder().prompt(pp).maxTokens(100).temperature(0.5).topP(1.0).frequencyPenalty(0.2).stop(li).echo(true).build();
// Using the service object to fetch the completion response
service.createCompletion("davinci",completionRequest).getChoices().forEach(System.out::println);
Save the code file as ‘text_summarization.java’ and run it from the terminal to generate the output. You can use the following command to run the file from your root folder:
(base) PS D:\GPT-3 Java> ./gradlew example:run
Your console should print the same summary as it did with the previous examples. For more examples of how you can use Java programming to interact with GPT-3, see our GitHub repository.
GPT-3 Sandbox Powered by Streamlit
In this section we will walk you through the GPT-3 Sandbox, an open-source tool we’ve created to help you turn your ideas into reality with just a few lines of Python code. We’ll show you how to use it and how to customize it for your specific application.
The goal of our sandbox is to empower you to create cool web applications, no matter what your technical background. It is built on top of the Streamlit framework.
To accompany this book, we have also created a video series with step-by-step instructions for creating and deploying your GPT-3 application, which you can access by scanning the QR code in Figure 3-1. Please follow it as you read this chapter.
Figure 3-1. QR code for GPT-3 Sandbox video series
We use VSCode as the IDE for our examples, but feel free to use any IDE. You’ll need to install the IDE before you start. Please also make sure you are running Python version 3.7 or above. You can confirm which version you have installed by running the following command:
python --version
Clone the code from this repository by opening a new terminal in your IDE and running a git clone command. After cloning the repository, the code structure in your IDE should look as follows:
Figure 3-2. Sandbox file directory structure
Everything you need to create and deploy a web application is already present in the code. You just need to tweak a few files to customize the sandbox for your specific use case.
Create a Python virtual environment named env (for example, with python -m venv env) and activate it. Then you can install the required dependencies.
Go to the email_generation folder. Your path should look like this:
(env) kairos_gpt3\GPT-3 Sandbox\email_generation>
From there, run the following command:
(env) kairos_gpt3\GPT-3 Sandbox\email_generation> pip install -r requirements.txt
Now you can start customizing the sandbox code. The first file that you need to look at is training_data.py. Open that file and replace the default prompt with the training prompt you want to use. You can use the GPT-3 Playground to experiment with different training prompts (see Chapter 2 and our video for more on customizing the sandbox).
You’re now ready to tweak the API parameters (Maximum tokens, Execution Engine, Temperature, Top-p, Frequency Penalty, Stop Sequence) as per the requirements of your application use case. We recommend experimenting with different values of the API parameters for a given training prompt in the Playground to determine what values will work best for your use case. Once you get satisfactory results, you can alter the values in the training_service.py file.
That’s it! Your GPT-3 based web application is now ready. You can run it locally using the following command:
(env) kairos_gpt3\GPT-3 Sandbox\email_generation> streamlit run gpt_app.py
Check to make sure it works, and then you can deploy the application to the internet using Streamlit sharing to showcase it to a wider audience. Our video offers a full deployment walkthrough.
Note: This application follows a simple workflow, where the training prompt receives a single input from the UI and comes up with the response. If your application requires a more complex workflow, where the training prompt takes in multiple inputs, customize the UI elements by going through the scripts app1.py, app2.py, and gpt_app.py. For details, refer to the Streamlit documentation.
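As a minimal illustration of the multiple-input case, several UI fields can be composed into a single training prompt before it is sent to the API. The helper function, field names, and template below are hypothetical, not part of the sandbox code:

```python
# Hypothetical helper: combines several UI inputs into one training prompt.
# The field names and the template are illustrative only.
def build_prompt(recipient: str, topic: str, tone: str) -> str:
    return (
        f"Write a {tone} email to {recipient} about {topic}.\n"
        "Email:"
    )

prompt = build_prompt("the marketing team", "the Q3 launch", "friendly")
print(prompt)
```

In the sandbox, each argument would come from its own Streamlit input widget, and the resulting string would replace the single-input prompt before the API call.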
In the next few chapters, we will explore different applications of GPT-3 and leverage this sandbox to create easily deployable web applications.
Conclusion
In this chapter, we learned how to use the OpenAI API with the programming languages Python, Go, and Java. We also walked through a low-code sandbox environment created using Streamlit that will help you to quickly turn your idea into an application. Lastly, we looked at the key requirements to go live with a GPT-3 application. This chapter gave you a programming perspective on the API; going forward, we’ll dive deeper into the burgeoning ecosystem empowered by GPT-3.