Commit 93a556e

dehidehidehi and Ke30 authored

Implemented Chat Completions endpoint (TheoKanning#135)

* WIP: Added ChatCompletion POJOs with documentation
* fix: switch enum to string enum
* Added chat completions implementation
* doc: updated README
* fix: chat completions all test passing
* doc: adding color coding to java examples
* doc: minor version bump
* Added undocumented field in ChatCompletionResult: added field which is present in response but not documented (as of today) in https://platform.openai.com/docs/api-reference/chat/create

Co-authored-by: Ke30 <email@email.com>

1 parent 3649c69 commit 93a556e

File tree

14 files changed: +316 -15 lines

‎README.md

+14 -12 lines changed

@@ -14,17 +14,18 @@ Includes the following artifacts:
 as well as an example project using the service.

 ## Supported APIs
-- [Models](https://beta.openai.com/docs/api-reference/models)
-- [Completions](https://beta.openai.com/docs/api-reference/completions)
-- [Edits](https://beta.openai.com/docs/api-reference/edits)
-- [Embeddings](https://beta.openai.com/docs/api-reference/embeddings)
-- [Files](https://beta.openai.com/docs/api-reference/files)
-- [Fine-tunes](https://beta.openai.com/docs/api-reference/fine-tunes)
-- [Images](https://beta.openai.com/docs/api-reference/images)
-- [Moderations](https://beta.openai.com/docs/api-reference/moderations)
+- [Models](https://platform.openai.com/docs/api-reference/models)
+- [Completions](https://platform.openai.com/docs/api-reference/completions)
+- [Chat Completions](https://platform.openai.com/docs/api-reference/chat/create)
+- [Edits](https://platform.openai.com/docs/api-reference/edits)
+- [Embeddings](https://platform.openai.com/docs/api-reference/embeddings)
+- [Files](https://platform.openai.com/docs/api-reference/files)
+- [Fine-tunes](https://platform.openai.com/docs/api-reference/fine-tunes)
+- [Images](https://platform.openai.com/docs/api-reference/images)
+- [Moderations](https://platform.openai.com/docs/api-reference/moderations)

 #### Deprecated by OpenAI
-- [Engines](https://beta.openai.com/docs/api-reference/engines)
+- [Engines](https://platform.openai.com/docs/api-reference/engines)

 ## Importing

@@ -54,7 +55,7 @@ and set your converter factory to use snake case and only include non-null fields
 If you're looking for the fastest solution, import the `service` module and use [OpenAiService](client/src/main/java/com/theokanning/openai/OpenAiService.java).

 > ⚠️The OpenAiService in the client module is deprecated, please switch to the new version in the service module.
-```
+```java
 OpenAiService service = new OpenAiService("your_token");
 CompletionRequest completionRequest = CompletionRequest.builder()
         .prompt("Somebody once told me the world is gonna roll me")
@@ -67,7 +68,8 @@ service.createCompletion(completionRequest).getChoices().forEach(System.out::println);
 ### Customizing OpenAiService
 If you need to customize OpenAiService, create your own Retrofit client and pass it in to the constructor.
 For example, do the following to add request logging (after adding the logging gradle dependency):
-```
+
+```java
 ObjectMapper mapper = defaultObjectMapper();
 OkHttpClient client = defaultClient(token, timeout)
         .newBuilder()
@@ -84,7 +86,7 @@ OpenAiService service = new OpenAiService(api);

 ## Running the example project
 All the [example](example/src/main/java/example/OpenAiApiExample.java) project requires is your OpenAI api token
-```
+```bash
 export OPENAI_TOKEN="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
 ./gradlew example:run
 ```
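
The request-logging example above is cut off by the diff context. As a rough sketch only (not the README's actual continuation), wiring a customized client into OpenAiService might look like the following, assuming OkHttp's HttpLoggingInterceptor plus a standard Retrofit setup with the Jackson converter and the RxJava2 call adapter:

```java
ObjectMapper mapper = defaultObjectMapper();
OkHttpClient client = defaultClient(token, timeout)
        .newBuilder()
        // Log full request/response bodies; requires the okhttp logging-interceptor dependency.
        .addInterceptor(new HttpLoggingInterceptor().setLevel(HttpLoggingInterceptor.Level.BODY))
        .build();

// Assumed wiring: a plain Retrofit instance pointed at the OpenAI base URL,
// reusing the snake-case Jackson mapper and the RxJava2 call adapter the API interface expects.
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://api.openai.com/")
        .client(client)
        .addConverterFactory(JacksonConverterFactory.create(mapper))
        .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
        .build();

OpenAiApi api = retrofit.create(OpenAiApi.class);
OpenAiService service = new OpenAiService(api);
```
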

‎api/src/main/java/com/theokanning/openai/completion/CompletionResult.java

+1 -0 lines changed

@@ -2,6 +2,7 @@

 import com.theokanning.openai.Usage;
 import lombok.Data;
+import lombok.NoArgsConstructor;

 import java.util.List;


ChatCompletionChoice.java (new file)

+24 -0 lines changed

@@ -0,0 +1,24 @@
+package com.theokanning.openai.completion.chat;
+import lombok.Data;
+
+/**
+ * A chat completion generated by GPT-3.5
+ */
+@Data
+public class ChatCompletionChoice {
+
+    /**
+     * The index of this completion in the returned list.
+     */
+    Integer index;
+
+    /**
+     * The {@link ChatMessageRole#ASSISTANT} message which was generated.
+     */
+    ChatMessage message;
+
+    /**
+     * The reason why GPT-3.5 stopped generating, for example "length".
+     */
+    String finishReason;
+}
ChatCompletionRequest.java (new file)

+87 -0 lines changed

@@ -0,0 +1,87 @@
+package com.theokanning.openai.completion.chat;
+
+import lombok.Builder;
+import lombok.Data;
+
+import java.util.List;
+import java.util.Map;
+
+@Data
+@Builder
+public class ChatCompletionRequest {
+
+    /**
+     * ID of the model to use. Currently, only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported.
+     */
+    String model;
+
+    /**
+     * The messages to generate chat completions for, in the <a
+     * href="https://platform.openai.com/docs/guides/chat/introduction">chat format</a>.<br>
+     * see {@link ChatMessage}
+     */
+    List<ChatMessage> messages;
+
+    /**
+     * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower
+     * values like 0.2 will make it more focused and deterministic.<br>
+     * We generally recommend altering this or top_p but not both.
+     */
+    Double temperature;
+
+    /**
+     * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens
+     * with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br>
+     * We generally recommend altering this or temperature but not both.
+     */
+    Double topP;
+
+    /**
+     * How many chat completion choices to generate for each input message.
+     */
+    Integer n;
+
+    /**
+     * If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only <a
+     * href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format">server-sent
+     * events</a> as they become available, with the stream terminated by a data: [DONE] message.
+     */
+    Boolean stream;
+
+    /**
+     * Up to 4 sequences where the API will stop generating further tokens.
+     */
+    List<String> stop;
+
+    /**
+     * The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will
+     * be (4096 - prompt tokens).
+     */
+    Integer maxTokens;
+
+    /**
+     * Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far,
+     * increasing the model's likelihood to talk about new topics.
+     */
+    Double presencePenalty;
+
+    /**
+     * Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far,
+     * decreasing the model's likelihood to repeat the same line verbatim.
+     */
+    Double frequencyPenalty;
+
+    /**
+     * Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100
+     * to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will
+     * vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100
+     * should result in a ban or exclusive selection of the relevant token.
+     */
+    Map<String, Integer> logitBias;
+
+
+    /**
+     * A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
+     */
+    String user;
+}
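
For orientation, here is a minimal usage sketch (not part of the commit) showing how these fields combine through the Lombok-generated builder; the model name, message text, and parameter values below are illustrative assumptions only.

```java
import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.completion.chat.ChatMessageRole;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChatCompletionRequestSketch {
    public static void main(String[] args) {
        // A single user message; roles are plain strings supplied via ChatMessageRole.
        List<ChatMessage> messages = new ArrayList<>();
        messages.add(new ChatMessage(ChatMessageRole.USER.value(), "Say hello in one short sentence."));

        // Hypothetical token id; bias values run from -100 to 100, and -100 effectively bans the token.
        Map<String, Integer> logitBias = new HashMap<>();
        logitBias.put("50256", -100);

        ChatCompletionRequest request = ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo")    // one of the models named in the javadoc above
                .messages(messages)
                .temperature(0.2)          // alter this or topP, not both
                .maxTokens(64)
                .logitBias(logitBias)
                .user("example-end-user")  // assumed identifier for abuse monitoring
                .build();

        // @Data generates a readable toString(), handy for inspecting the request.
        System.out.println(request);
    }
}
```
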
ChatCompletionResult.java (new file)

+43 -0 lines changed

@@ -0,0 +1,43 @@
+package com.theokanning.openai.completion.chat;
+import com.theokanning.openai.Usage;
+import lombok.Data;
+
+import java.util.List;
+
+/**
+ * Object containing a response from the chat completions api.
+ */
+@Data
+public class ChatCompletionResult {
+
+    /**
+     * Unique id assigned to this chat completion.
+     */
+    String id;
+
+    /**
+     * The type of object returned, should be "chat.completion"
+     */
+    String object;
+
+    /**
+     * The creation time in epoch seconds.
+     */
+    long created;
+
+    /**
+     * The GPT-3.5 model used.
+     */
+    String model;
+
+    /**
+     * A list of all generated completions.
+     */
+    List<ChatCompletionChoice> choices;
+
+    /**
+     * The API usage for this request.
+     */
+    Usage usage;
+
+}
ChatMessage.java (new file)

+29 -0 lines changed

@@ -0,0 +1,29 @@
+package com.theokanning.openai.completion.chat;
+
+import lombok.AllArgsConstructor;
+import lombok.Builder;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+/**
+ * <p>Each object has a role (either “system”, “user”, or “assistant”) and content (the content of the message). Conversations can be as short as 1 message or fill many pages.</p>
+ * <p>Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.</p>
+ * <p>The system message helps set the behavior of the assistant. In the example above, the assistant was instructed with “You are a helpful assistant.”<br>
+ * The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.<br>
+ * The assistant messages help store prior responses. They can also be written by a developer to help give examples of desired behavior.
+ * </p>
+ *
+ * see <a href="https://platform.openai.com/docs/guides/chat/introduction">OpenAi documentation</a>
+ */
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+public class ChatMessage {
+
+    /**
+     * Must be either 'system', 'user', or 'assistant'.<br>
+     * You may use {@link ChatMessageRole} enum.
+     */
+    String role;
+    String content;
+}
ChatMessageRole.java (new file)

+20 -0 lines changed

@@ -0,0 +1,20 @@
+package com.theokanning.openai.completion.chat;
+
+/**
+ * see {@link ChatMessage} documentation.
+ */
+public enum ChatMessageRole {
+    SYSTEM("system"),
+    USER("user"),
+    ASSISTANT("assistant");
+
+    private final String value;
+
+    ChatMessageRole(final String value) {
+        this.value = value;
+    }
+
+    public String value() {
+        return value;
+    }
+}
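
To make the conversation structure described in the ChatMessage javadoc concrete, here is a short sketch (not part of the commit); the message contents are assumptions chosen for illustration.

```java
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.completion.chat.ChatMessageRole;

import java.util.ArrayList;
import java.util.List;

public class ChatConversationSketch {
    public static void main(String[] args) {
        // System message first, then alternating user and assistant messages,
        // as the ChatMessage javadoc describes.
        List<ChatMessage> conversation = new ArrayList<>();
        conversation.add(new ChatMessage(ChatMessageRole.SYSTEM.value(), "You are a helpful assistant."));
        conversation.add(new ChatMessage(ChatMessageRole.USER.value(), "Who won the World Series in 2020?"));
        conversation.add(new ChatMessage(ChatMessageRole.ASSISTANT.value(), "The Los Angeles Dodgers won the World Series in 2020."));
        conversation.add(new ChatMessage(ChatMessageRole.USER.value(), "Where was it played?"));

        // The role field is a plain String; the enum's value() yields the exact
        // lowercase strings the API expects ("system", "user", "assistant").
        for (ChatMessage message : conversation) {
            System.out.println(message.getRole() + ": " + message.getContent());
        }
    }
}
```
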

‎client/src/main/java/com/theokanning/openai/OpenAiApi.java

+5 -0 lines changed

@@ -2,6 +2,8 @@

 import com.theokanning.openai.completion.CompletionRequest;
 import com.theokanning.openai.completion.CompletionResult;
+import com.theokanning.openai.completion.chat.ChatCompletionRequest;
+import com.theokanning.openai.completion.chat.ChatCompletionResult;
 import com.theokanning.openai.edit.EditRequest;
 import com.theokanning.openai.edit.EditResult;
 import com.theokanning.openai.embedding.EmbeddingRequest;
@@ -32,6 +34,9 @@ public interface OpenAiApi {

     @POST("/v1/completions")
     Single<CompletionResult> createCompletion(@Body CompletionRequest request);
+
+    @POST("/v1/chat/completions")
+    Single<ChatCompletionResult> createChatCompletion(@Body ChatCompletionRequest request);

     @Deprecated
     @POST("/v1/engines/{engine_id}/completions")

‎client/src/main/java/com/theokanning/openai/OpenAiService.java

+6 -0 lines changed

@@ -6,6 +6,8 @@
 import com.fasterxml.jackson.databind.PropertyNamingStrategy;
 import com.theokanning.openai.completion.CompletionRequest;
 import com.theokanning.openai.completion.CompletionResult;
+import com.theokanning.openai.completion.chat.ChatCompletionRequest;
+import com.theokanning.openai.completion.chat.ChatCompletionResult;
 import com.theokanning.openai.edit.EditRequest;
 import com.theokanning.openai.edit.EditResult;
 import com.theokanning.openai.embedding.EmbeddingRequest;
@@ -119,6 +121,10 @@ public Model getModel(String modelId) {
     public CompletionResult createCompletion(CompletionRequest request) {
         return api.createCompletion(request).blockingGet();
     }
+
+    public ChatCompletionResult createChatCompletion(ChatCompletionRequest request) {
+        return api.createChatCompletion(request).blockingGet();
+    }

     /**
      * Use {@link OpenAiService#createCompletion(CompletionRequest)} and {@link CompletionRequest#model}instead
ChatCompletionTest.java (new file)

+40 -0 lines changed

@@ -0,0 +1,40 @@
+package com.theokanning.openai;
+import com.theokanning.openai.completion.CompletionChoice;
+import com.theokanning.openai.completion.CompletionRequest;
+import com.theokanning.openai.completion.chat.ChatCompletionChoice;
+import com.theokanning.openai.completion.chat.ChatCompletionRequest;
+import com.theokanning.openai.completion.chat.ChatMessage;
+import com.theokanning.openai.completion.chat.ChatMessageRole;
+import org.junit.jupiter.api.Test;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+class ChatCompletionTest {
+
+    String token = System.getenv("OPENAI_TOKEN");
+    OpenAiService service = new OpenAiService(token);
+
+    @Test
+    void createChatCompletion() {
+        final List<ChatMessage> messages = new ArrayList<>(); // java version agnostic
+        final ChatMessage systemMessage = new ChatMessage(ChatMessageRole.SYSTEM.value(), "You are a dog and will speak as such.");
+        messages.add(systemMessage);
+
+        ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
+                .builder()
+                .model("gpt-3.5-turbo")
+                .messages(messages)
+                .n(5)
+                .maxTokens(50)
+                .logitBias(new HashMap<>())
+                .build();
+
+        List<ChatCompletionChoice> choices = service.createChatCompletion(chatCompletionRequest).getChoices();
+        assertEquals(5, choices.size());
+
+    }
+}

‎client/src/test/java/com/theokanning/openai/CompletionTest.java

+1 -2 lines changed

@@ -7,8 +7,7 @@
 import java.util.HashMap;
 import java.util.List;

-import static org.junit.jupiter.api.Assertions.assertEquals;
-import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.*;


 public class CompletionTest {

‎gradle.properties

+1 -1 lines changed

@@ -1,5 +1,5 @@
 GROUP=com.theokanning.openai-gpt3-java
-VERSION_NAME=0.10.0
+VERSION_NAME=0.10.1

 POM_URL=https://github.com/theokanning/openai-java
 POM_SCM_URL=https://github.com/theokanning/openai-java

‎service/src/main/java/com/theokanning/openai/service/OpenAiService.java

+6 -0 lines changed

@@ -10,6 +10,8 @@
 import com.theokanning.openai.OpenAiHttpException;
 import com.theokanning.openai.completion.CompletionRequest;
 import com.theokanning.openai.completion.CompletionResult;
+import com.theokanning.openai.completion.chat.ChatCompletionRequest;
+import com.theokanning.openai.completion.chat.ChatCompletionResult;
 import com.theokanning.openai.edit.EditRequest;
 import com.theokanning.openai.edit.EditResult;
 import com.theokanning.openai.embedding.EmbeddingRequest;
@@ -85,6 +87,10 @@ public Model getModel(String modelId) {
     public CompletionResult createCompletion(CompletionRequest request) {
         return execute(api.createCompletion(request));
     }
+
+    public ChatCompletionResult createChatCompletion(ChatCompletionRequest request) {
+        return execute(api.createChatCompletion(request));
+    }

     public EditResult createEdit(EditRequest request) {
         return execute(api.createEdit(request));
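
To show the new endpoint end to end through the service module, here is a brief sketch mirroring the test above; it assumes the service-module OpenAiService exposes the same single-token constructor shown for the client in the README, and the prompt text is illustrative. Field access relies on the Lombok-generated getters.

```java
import com.theokanning.openai.completion.chat.ChatCompletionChoice;
import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatCompletionResult;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.completion.chat.ChatMessageRole;
import com.theokanning.openai.service.OpenAiService;

import java.util.ArrayList;
import java.util.List;

public class ChatCompletionServiceSketch {
    public static void main(String[] args) {
        // Token read from the environment, as in the example project.
        OpenAiService service = new OpenAiService(System.getenv("OPENAI_TOKEN"));

        List<ChatMessage> messages = new ArrayList<>();
        messages.add(new ChatMessage(ChatMessageRole.USER.value(), "Write a one-line greeting."));

        ChatCompletionRequest request = ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo")
                .messages(messages)
                .build();

        ChatCompletionResult result = service.createChatCompletion(request);

        // Each choice carries the generated assistant message and a finish reason.
        for (ChatCompletionChoice choice : result.getChoices()) {
            System.out.println(choice.getMessage().getContent());
            System.out.println("finish reason: " + choice.getFinishReason());
        }
    }
}
```
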
