OpenAI Proxy is a free proxy service that allows you to access the OpenAI API from anywhere in the world. It is a great way to bypass censorship and access OpenAI’s powerful AI tools.


Reference Reading

Due to the dual restrictions of OpenAI and the GFW (Great Firewall), users in China are unable to access OpenAI’s API directly. However, a proxy service has been provided to developers for free.

✅ Proxy Address: https://api.openai-proxy.com

😄 To use the proxy service from within China, simply replace the domain name "api.openai.com" with "api.openai-proxy.com". The proxy supports all of OpenAI's official interfaces, including SSE (Server-Sent Events).
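In Python, for example, the domain swap can be captured in a tiny helper (the function name `to_proxy` is just an illustration, not part of any SDK):

```python
# Swapping the official host for the proxy host is the only change needed.
OFFICIAL_HOST = "api.openai.com"
PROXY_HOST = "api.openai-proxy.com"

def to_proxy(url: str) -> str:
    """Rewrite an official OpenAI endpoint URL to route through the proxy."""
    return url.replace(OFFICIAL_HOST, PROXY_HOST)

print(to_proxy("https://api.openai.com/v1/chat/completions"))
# https://api.openai-proxy.com/v1/chat/completions
```

With the openai Python package (v0.27.x), the equivalent one-liner is setting `openai.api_base = "https://api.openai-proxy.com/v1"`.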

  1. 🔑 Obtaining an API Key: Register for an OpenAI account and obtain your API key. The process is the same as with the official OpenAI API.
  2. 🐞 Testing the Proxy Service: Replace "<your_openai_api_key>" in the following commands with your own API key.

Testing chat completion command

curl https://api.openai-proxy.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your_openai_api_key>" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}'

Response example

{
  "id": "chatcmpl-21lvNzPaxlsQJh0BEIb9DqoO0pZUY",
  "object": "chat.completion",
  "created": 1680656905,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 10,
    "total_tokens": 20
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello there! How can I assist you today?"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}

Testing image generation command

curl https://api.openai-proxy.com/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your_openai_api_key>" \
  -d '{"prompt": "A bikini girl", "n": 2, "size": "512x512"}'

Response example

{
  "created": 1680705608,
  "data": [
    { "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-xxxxxxx" },
    { "url": "https://oaidalleapiprodscus.blob.core.windows.net/private/org-xxxxxxx" }
  ]
}
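As a sketch of the request body used above (the helper name `image_payload` is hypothetical), note that the `size` field only accepts three values:

```python
# Build and validate the JSON body for POST /v1/images/generations.
ALLOWED_SIZES = {"256x256", "512x512", "1024x1024"}

def image_payload(prompt: str, n: int = 1, size: str = "512x512") -> dict:
    """Return a request body, rejecting sizes the API would refuse."""
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    return {"prompt": prompt, "n": n, "size": size}
```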

  3. 💬 Continuous Conversation Mode: To implement continuous conversation, follow the official documentation: https://platform.openai.com/docs/guides/chat/introduction

In Python, for example, you can create a chat completion with a series of messages like this:

Note: you need to be using OpenAI Python v0.27.0 for the code below to work

import openai

openai.api_key = "<your_openai_api_key>"
# Route requests through the proxy instead of api.openai.com:
openai.api_base = "https://api.openai-proxy.com/v1"

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)

Note that the role “system” can be used to define the AI’s role, “user” represents the user’s input, and “assistant” represents the AI’s response.

OpenAI itself retains no memory of past conversations. To implement continuous conversation, include the previous user messages (role: user), the AI's previous responses (role: assistant), and the new user message in the messages[] array, and send that full array with every request. It's that simple.
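Concretely, the append-and-resend pattern can be sketched like this (`build_messages` and `record_turn` are illustrative names, not part of any SDK):

```python
# Maintain a running history and rebuild messages[] for every request.
SYSTEM_PROMPT = {"role": "system", "content": "You are a helpful assistant."}

def build_messages(history: list, user_input: str) -> list:
    """Full messages[] for this turn: system prompt + history + new input."""
    return [SYSTEM_PROMPT] + history + [{"role": "user", "content": user_input}]

def record_turn(history: list, user_input: str, assistant_reply: str) -> list:
    """Store both sides of the exchange so the next request carries them."""
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": assistant_reply})
    return history

history = []
msgs = build_messages(history, "Who won the world series in 2020?")
# msgs would be passed as messages=msgs to the chat completion call;
# once the reply arrives, record it so the next turn has context:
record_turn(history, "Who won the world series in 2020?",
            "The Los Angeles Dodgers won the World Series in 2020.")
```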


Please note that this method may result in longer message content for each interaction, which could affect the cost since OpenAI charges based on token count. Consider the trade-off between the length of each message and your usage.

  4. 💹 Checking Account Balance: The original method of checking the account balance via OpenAI's official API has been revoked due to misuse. Currently, there is a workaround to estimate the available balance: subtract the amount your account has consumed in the past 90 days from the total authorized limit. This gives an approximate available balance. For accurate figures, log in to the OpenAI official website.

Two API endpoints are involved, and you need to encapsulate them yourself:

Querying the total authorized limit (system_hard_limit_usd): GET https://api.openai-proxy.com/v1/dashboard/billing/subscription

Querying usage for the last N days (total_usage): GET https://api.openai-proxy.com/v1/dashboard/billing/usage?start_date=2023-03-01&end_date=2023-05-01

Balance ≈ system_hard_limit_usd - total_usage

Note that you also need to pass your API key in the headers when making these requests.
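A minimal sketch of that arithmetic, assuming (as these dashboard endpoints commonly report) that `total_usage` comes back in hundredths of a dollar; verify the unit against your own responses:

```python
import json
import urllib.request

BILLING_BASE = "https://api.openai-proxy.com/v1/dashboard/billing"

def _get(path: str, api_key: str) -> dict:
    """GET a dashboard billing endpoint with the API key in the headers."""
    req = urllib.request.Request(
        BILLING_BASE + path, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def estimate_balance(hard_limit_usd: float, total_usage: float) -> float:
    """Balance ≈ system_hard_limit_usd - total_usage (usage assumed in cents)."""
    return hard_limit_usd - total_usage / 100.0

def check_balance(api_key: str, start_date: str, end_date: str) -> float:
    sub = _get("/subscription", api_key)
    usage = _get(f"/usage?start_date={start_date}&end_date={end_date}", api_key)
    return estimate_balance(sub["system_hard_limit_usd"], usage["total_usage"])
```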

  5. ℹ️ Rate Limits: Rate limits are measured in three ways: RPM (requests per minute), RPD (requests per day), and TPM (tokens per minute).

The specific rate limits are subject to OpenAI’s official rules, which may be dynamically adjusted based on user usage.

For TPM purposes, 1 TPM equals:

- davinci: 1 token per minute
- curie: 25 tokens per minute
- babbage: 100 tokens per minute
- ada: 200 tokens per minute

| User type | Text & Embedding | Chat | Edit (deprecated) | Image | Audio |
|---|---|---|---|---|---|
| Free trial users | 200 RPD, 3 RPM, 150,000 TPM | 200 RPD, 3 RPM, 40,000 TPM | 200 RPD, 3 RPM, 150,000 TPM | 200 RPD, 5 images/min | 200 RPD, 3 RPM |
| Pay-as-you-go users (first 48 hours) | 2,000 RPD, 60 RPM, 250,000 TPM | 2,000 RPD, 60 RPM, 60,000 TPM | 2,000 RPD, 20 RPM, 150,000 TPM | 2,000 RPD, 50 images/min | 2,000 RPD, 50 RPM |
| Pay-as-you-go users (after 48 hours) | 3,500 RPM, 350,000 TPM | 3,500 RPM, 90,000 TPM | 20 RPM, 150,000 TPM | 50 images/min | 50 RPM |

Official documentation: https://platform.openai.com/docs/guides/rate-limits/overview


Let’s take the example of the /v1/chat/completions endpoint:

- Free trial users: 3 requests per minute, 200 requests per day.
- Pay-as-you-go users (first 48 hours): 60 requests per minute, 2,000 requests per day.
- Pay-as-you-go users (after 48 hours): 3,500 requests per minute.
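When you exceed these limits the API rejects the request; a generic mitigation (not something the proxy itself requires) is to retry with exponential backoff. A minimal sketch, with injectable `sleep` so it can be tested:

```python
import time

def with_backoff(call, retriable=(Exception,), max_retries=5,
                 base_delay=1.0, sleep=time.sleep):
    """Call `call()` and retry with exponential backoff on retriable errors."""
    for attempt in range(max_retries):
        try:
            return call()
        except retriable:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

In practice you would pass your SDK's rate-limit exception type as `retriable` and wrap the completion call in a small lambda.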

  1. โญ๏ธ Outstanding Open-Source Projects The following open-source projects support configuring the proxy address. You can set “https://api.openai-proxy.com” as the base URL for these projects, allowing you to use them in the domestic network environment.

- OpenAI-Translator: https://github.com/yetone/openai-translator
- ChatGPT-Next-Web: https://github.com/Yidadaa/ChatGPT-Next-Web
- ChatGPT-Web: https://github.com/Chanzhaoyu/chatgpt-web

🌴 We strongly recommend that eligible enterprise users set up their own proxy services for better security and stability.

๐Ÿน If you choose to set up your own proxy service, we recommend using Vultr’s Tokyo region cloud server, starting from $6 per month.