Mar 16, 2023 · 11 min read

OpenAI releases GPT-4, its most powerful AI model yet

OpenAI released GPT-4, a multi-modal AI large language model that's smarter and more powerful than previous models. It's the latest milestone in scaling up deep learning.



🤖
This article was written with the help of AI tools from NotionAI, Canva and Quillbot.





“We look forward to GPT-4 becoming a valuable tool in improving people’s lives by powering many applications. There’s still a lot of work to do, and we look forward to improving this model through the collective efforts of the community building on top of, exploring, and contributing to the model.” - OpenAI

Welcome GPT-4

After much anticipation and speculation, and the occasional touch of hysteria, OpenAI finally announced the release of GPT-4.

To start with, OpenAI has made GPT-4 available to paying users through its ChatGPT Plus subscription service. Developers can also join a waitlist for access to the soon-to-be-released API. However, it turns out that some companies already have a head start with the new super-duper AI from OpenAI.

  • Microsoft has been using GPT-4 in its chatbot technology, Bing Chat, for about 4 weeks.
  • Stripe uses the technology to scan business websites and provide a summary to customer support staff.
  • Duolingo has integrated GPT-4 into a new language learning subscription tier.
  • Morgan Stanley is developing a system powered by GPT-4 that retrieves information from company documents and provides it to financial analysts.
  • Khan Academy is using GPT-4 to create an automated tutor.
  • Be My Eyes, an app that connects blind and low-vision users with sighted volunteers who describe what the phone’s camera sees, is using GPT-4 to power its new “Virtual Volunteer” feature.

This is on top of major brands that have integrated ChatGPT (based on GPT-3.5) into their customer experience:

  • Shopify integrated ChatGPT into an online shopping assistant that allows customers to have a conversation with the platform.
  • Instacart implemented an AI chatbot feature using ChatGPT called “Ask Instacart”.
  • The Coca-Cola Company has signed a deal with OpenAI and Bain & Company to implement ChatGPT and DALL-E into its marketing and global operations.
  • Snapple, the Dr Pepper iced tea brand, is engaging with its customers by giving them access to ChatGPT to create their own interesting facts.
  • Quizlet, the online learning platform, has integrated ChatGPT to act as an online tutor.

What’s the fuss about GPT-4?

According to OpenAI, and all the early reports I've read, GPT-4 is a significant improvement over its predecessor, GPT-3.5, as it can accept both text and image inputs and performs at a "human level" on various professional and academic benchmarks. For instance, it scores in the top 10% of test takers on a simulated bar exam, while GPT-3.5 scores in the bottom 10%.

OpenAI suggests that while the difference between GPT-3.5 and GPT-4 may not be noticeable in casual conversation, it becomes apparent when dealing with complex tasks. GPT-4 is more dependable and imaginative, and can handle more nuanced instructions compared to GPT-3.5.

OpenAI spent six months "iteratively aligning" GPT-4 using lessons from an internal adversarial testing program and ChatGPT. The company claims that this resulted in the "best-ever results" on factuality, steerability, and avoiding going beyond set parameters. Like previous GPT models, GPT-4 was trained using publicly available data, including data from public web pages, and data licensed by OpenAI.


GPT-4 is multimodal - text and images

One of GPT-4's most interesting features is its ability to understand images in addition to text. GPT-4 can caption and interpret complex images, such as identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.

Imagine you took a photo of everything in your food cupboard and wanted to know what you could make with it. With GPT-4, you can upload the image and the system will suggest recipes - a cake, say - or ask whether you have some jam so it can propose a Victoria sandwich.

This is an awesome capability that we’ve not yet seen the full potential of, because it’s not yet available to all OpenAI customers. But we do know that OpenAI is testing it with Be My Eyes and their new Virtual Volunteer feature designed for the visually impaired. For more on how it works, read this blog post.

“For example, if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what’s in it, but also extrapolate and analyse what can be prepared with those ingredients. The tool can also then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them.”
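Image input isn’t publicly available yet, so any code here is necessarily speculative. Purely as a hypothetical sketch - assuming OpenAI eventually exposes images through the same chat endpoint it uses for text - a call might look something like this (the mixed text-and-image message format is my assumption, not a documented API):

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Hypothetical: image input is not yet exposed in the public API.
# The mixed text-and-image content list below is an assumed format,
# not something you can call today.
response = openai.ChatCompletion.create(
    model="gpt-4",  # image input is currently limited to partners like Be My Eyes
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I make with these ingredients?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cupboard.jpg"}},
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```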

GPT-4 can be anything you tell it to be

GPT-4 improves on its predecessors’ ability to follow direct instructions about how it should behave, a property OpenAI calls “steerability.” These are instructions that set the tone and establish boundaries for the AI’s subsequent interactions.

For instance, the GPT-4 prompt might say, "You are a tutor that always responds in the Socratic style. You never give the student the answer, but instead, try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it's at just the right level for them."

This is a key defining feature because it allows GPT-4 to be whoever and whatever you want it to be. It also introduces an important control mechanism. Take education, for example. Imagine that students were permitted to use GPT-4 to complete assignments, but only through an interface provided by the educator, one that accesses the AI via the GPT-4 API. The educator could instruct GPT-4 not to give away the answers, or not to make things too easy for the student.
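To make that concrete, here is a minimal sketch of how a developer might set such behaviour using OpenAI’s Python library and the chat API’s system message. The prompt wording and the student question are illustrative, and it assumes you have GPT-4 API access:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumes GPT-4 API access off the waitlist

# The system message "steers" the model: every subsequent reply
# follows the persona and boundaries the educator defines here.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor that always responds in the Socratic style. "
                "You never give the student the answer, but instead ask just "
                "the right question to help them learn to think for themselves."
            ),
        },
        {"role": "user", "content": "What is the answer to 3x + 7 = 22?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Because the system message is set by the educator’s application rather than typed by the student, it acts as the control mechanism described above: the student interacts only through the user role and never sees or edits the instructions steering the model.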


Interview with OpenAI’s Greg Brockman: GPT-4 isn’t perfect, but neither are you
In an interview with TechCrunch, OpenAI president Greg Brockman peeled back the curtains on what makes GPT-4 a big deal.
