OpenAI, the San Francisco-based tech company that gained global recognition with the launch of ChatGPT, unveiled the latest iteration of its artificial intelligence software, named GPT-4, on March 15, 2023. According to a statement on the company’s website, GPT-4’s enhanced general knowledge and problem-solving skills allow it to tackle complex problems with greater accuracy. OpenAI calls this release “the latest milestone in its effort in scaling up deep learning.”
GPT-4 and Its Capabilities
According to OpenAI’s website, GPT-4 is a large multimodal model that can process both text and image inputs and generate text outputs. Although it falls short of human-level performance in many real-world situations, it has demonstrated human-level competence on a range of academic and professional benchmarks.
OpenAI’s website also notes that in casual conversation, there is little to no difference between GPT-3.5 and GPT-4. The difference becomes apparent once a task reaches a certain complexity threshold: GPT-4 is more reliable, more creative, and better able to handle nuanced instructions than GPT-3.5.
This means that GPT-4 can generate, edit, and revise a range of creative and technical writing assignments, such as composing songs, writing screenplays, and even adapting to a user’s personal writing style.
GPT-4 can also accept image inputs alongside text. Given inputs that mix text and images, it can generate natural language or code outputs across a variety of domains, including documents containing text and photographs, diagrams, or screenshots. On these mixed inputs, GPT-4 exhibits capabilities comparable to those it shows on text-only inputs.
Another impressive feature of GPT-4 is its capability to process over 25,000 words of text, making it suitable for various use cases, such as generating long-form content, facilitating extended conversations, and analyzing documents for search purposes. This expanded capacity significantly enhances GPT-4’s versatility and utility in a wide range of applications.
Safety and Alignment
The creators of ChatGPT said they spent six months making GPT-4 safer and more aligned. According to OpenAI, GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5.
To achieve this, OpenAI incorporated more human feedback, including feedback from ChatGPT users, and solicited input from over 50 experts across domains such as AI safety and security. The company also leveraged real-world usage data from its previous models to inform GPT-4’s safety research and monitoring system. Moving forward, OpenAI says it will continue to update and improve GPT-4 based on feedback and real-world usage.
GPT-4’s Collaborations
OpenAI collaborated with several organizations to build innovative products with GPT-4. Here are some of them, according to its website:
- Duolingo. GPT-4 deepens the conversation on Duolingo.
- Be My Eyes. Be My Eyes uses GPT-4 to transform visual accessibility.
- Stripe Docs. Stripe leverages GPT-4 to streamline user experience and combat fraud.
- Morgan Stanley. The wealth management firm deploys GPT-4 to organize its vast knowledge base.
- Khan Academy. Khan Academy explores the potential for GPT-4 in a limited pilot program.
- Government of Iceland. Iceland used GPT-4 to preserve its language.
- Bing Chat. Microsoft’s Bing Chat, a chatbot developed with OpenAI, runs on GPT-4.
Availability to Paying Users
GPT-4 is not currently available for free. It is accessible only to OpenAI’s paying subscribers through ChatGPT Plus, subject to a usage cap. Developers interested in accessing the API can sign up on a waitlist.
For developers, OpenAI’s pricing for GPT-4 API access is $0.03 per 1,000 “prompt” tokens (the raw text input provided to GPT-4, equivalent to about 750 words) and $0.06 per 1,000 “completion” tokens (the content GPT-4 generates in response, also equivalent to about 750 words). It is worth noting that tokens are the individual units of text the model processes, and a single word can be split across several tokens. For instance, the word “fantastic” would be represented by the tokens “fan,” “tas,” and “tic.”
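To make the arithmetic concrete, here is a minimal sketch of how a developer might estimate the cost of a single request from token counts, using only the per-1,000-token rates quoted above (the function name and the 750-words-per-1,000-tokens rule of thumb are illustrative, not part of any official API):

```python
# Rates quoted in the article: $0.03 per 1,000 prompt tokens,
# $0.06 per 1,000 completion tokens.
PROMPT_RATE_PER_TOKEN = 0.03 / 1000
COMPLETION_RATE_PER_TOKEN = 0.06 / 1000

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated dollar cost of one request, given token counts."""
    return (prompt_tokens * PROMPT_RATE_PER_TOKEN
            + completion_tokens * COMPLETION_RATE_PER_TOKEN)

# A roughly 750-word prompt is about 1,000 tokens; likewise for the reply,
# so a full-length exchange costs about $0.03 + $0.06 = $0.09.
print(f"${estimate_cost(1000, 1000):.2f}")
```

Actual token counts depend on the tokenizer, so in practice a developer would count tokens with a tokenizer library rather than estimating from word counts.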
Sources:
Image and Video Source from OpenAI.com
OpenAI.com
Techcrunch.com