OpenAI Launches GPT-4, Their Newest AI Model

Mar 14, 2023


On Tuesday, OpenAI announced the launch of GPT-4, a new artificial intelligence (AI) model that can understand both images and text. According to OpenAI, the model is "the latest milestone in its effort in scaling up deep learning." But what's different from ChatGPT (GPT-3.5)?


Improved Performance and New Features


GPT-4 is a significant improvement over its predecessor, GPT-3.5, which accepted only text inputs. GPT-4's most significant upgrade is its ability to accept both image and text inputs. According to OpenAI, GPT-4 also performs at "human level" on various professional and academic benchmarks: it scored around the top 10% of test takers on a simulated bar exam, whereas GPT-3.5 scored around the bottom 10%.


Pricing and Availability


GPT-4 is now available to OpenAI's paying customers through ChatGPT Plus, with a usage cap. Developers can sign up for a waitlist to access the application programming interface (API). With GPT-4, users can expect to pay $0.03 per 1,000 "prompt" tokens (about 750 words) and $0.06 per 1,000 "completion" tokens (also about 750 words).
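
To make that pricing concrete, here is a minimal sketch of a GPT-4 API call plus a cost estimate for it, assuming you already have access via the waitlist. It uses the openai Python library's chat completions endpoint; the per-token rates are the ones quoted above, while the API key, the prompt, and the resulting token counts are purely illustrative.

import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# Per-1,000-token rates quoted for GPT-4 at launch.
PROMPT_PRICE_PER_1K = 0.03      # dollars per 1,000 "prompt" tokens
COMPLETION_PRICE_PER_1K = 0.06  # dollars per 1,000 "completion" tokens

# A plain text request; image input is not generally available yet.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain tokens in one sentence."}],
)
print(response.choices[0].message.content)

# The response reports the billed token counts, so the cost of
# this single call can be estimated directly.
usage = response.usage
cost = (usage.prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
    + (usage.completion_tokens / 1000) * COMPLETION_PRICE_PER_1K
print(f"~${cost:.4f} for {usage.prompt_tokens} prompt + "
      f"{usage.completion_tokens} completion tokens")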


Early Adopters


Microsoft recently confirmed that Bing Chat, its chatbot technology co-developed with OpenAI, is running on GPT-4, so you can already get access by heading over to Bing. Quite a marketing feat, right? Other early adopters of GPT-4 include Stripe, Duolingo, Morgan Stanley, Bain & Company, and Khan Academy.


OpenAI Spent Six Months Aligning GPT-4


OpenAI spent six months "iteratively aligning" GPT-4 using lessons from an internal adversarial testing program and ChatGPT, resulting in "best-ever results" on factuality, steerability, and refusing to go outside of guardrails, according to the company. GPT-4 was trained using publicly available data, including from public web pages, as well as data that OpenAI licensed. OpenAI also worked with Microsoft to develop a "supercomputer" from the ground up in the Azure cloud, which was used to train GPT-4.


And It Can Understand Images Too…


GPT-4's most fascinating aspect is its ability to understand images as well as text. GPT-4 can caption and interpret complex images, such as identifying a Lightning Cable adapter from a picture of a plugged-in iPhone. However, this feature is not available to all OpenAI customers yet; for now, OpenAI is testing it with a single partner, Be My Eyes. Be My Eyes' new Virtual Volunteer feature, powered by GPT-4, can answer questions about images sent to it.


The example below shows it creating a set of possible recipes from an image of a refrigerator's contents.



Virtual Volunteer Feature


OpenAI explains how the Virtual Volunteer feature works: "For example, if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what's in it but also extrapolate and analyze what can be prepared with those ingredients. The tool can also then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them."
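
Virtual Volunteer itself is limited to Be My Eyes for now, so the image side of this flow can't be reproduced through the public API. As a rough, text-only approximation of that ingredients-to-recipes idea, here is a sketch using the same chat completions call as above; the ingredient list stands in for what image understanding would extract from a fridge photo, and is entirely made up.

import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# Text-only stand-in: pass hypothetical fridge contents as text, since
# image input is not available through the public API yet.
ingredients = "eggs, spinach, feta, and tortillas"
reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": f"My fridge contains {ingredients}. Suggest two recipes "
                   "and give step-by-step instructions for each.",
    }],
)
print(reply.choices[0].message.content)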


Final Thoughts


The release of GPT-4 by OpenAI is clearly a significant milestone in the development of artificial intelligence. The speed with which OpenAI has released GPT-4, less than four months after the release of GPT-3.5, shows the rapid acceleration the industry is experiencing. GPT-4's ability to understand images and text, as well as its improved performance, makes it a game-changing tool with a much wider range of applications. Right, so how do you access it? GPT-4 is available to OpenAI's paying customers through ChatGPT Plus, and developers can sign up for a waitlist to access the API. If you want to stay up to date with Generative AI and learn tips and tricks from the best, join our free community with 17,000 others.