The $335,000 ChatGPT Skill Savvy Online Students Need to Know

The world changed on January 30, 2023. That was the day a career website published a story about a new kind of job with an intriguing title that few readers had ever heard of. The listing ignited so much excitement that it soon went viral.

But this was no ordinary job that eFinancialCareers was writing about. The employer was Anthropic, a San Francisco artificial intelligence startup in which Google had invested $400 million. The company had posted a listing for a “prompt engineer” and was willing to pay up to $335,000 annually for the right candidate.

Anthropic’s job description called for an employee who would figure out and document “the best methods of prompting our AI to complete a wide range of tasks.” Yet this highly paid position did not require years of programming experience. Instead, the company merely asked for “basic programming and QA skills” and fluency with writing “small Python programs.” And eFinancialCareers writer Alex McMurray pointed out that Anthropic never confirmed software coding would even comprise a portion of the job.

In other words, what astonished so many was that Anthropic was advertising a $335,000 software engineering job that didn’t require writing any code. It turns out that prompt engineers predominantly write natural-language prose rather than code to test, develop, and improve AI systems, including large language models (LLMs) like OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard.

Only four months later, wall-to-wall press coverage has helped make prompt engineering the hottest job on the planet. But the skill is not limited to professionals: anyone who interacts with an AI chatbot to elicit desired responses from that system engages in their own form of prompt engineering, which some have described as the most valuable computer skill in history.

As shown in this guide, some prompts are more effective than others. Online students who can optimize their prompts by applying the principles, best practices, and tips we suggest will likely obtain more useful results from AI systems in less time.

Why Is Prompt Engineering Such a Valuable Skill?

Before we present expert recommendations that students and other new AI users can apply right away, it’s helpful to briefly acknowledge why prompt engineering is so valuable.

Prompts are the instructions that guide ChatGPT and the other large language models in performing their tasks. If artificial intelligence systems receive poor quality or incomplete instructions, they typically return irrelevant, inaccurate, or misleading results.

According to IBM’s Director of Data Science Armand Ruiz, prompts are becoming increasingly important. He says that effective prompts deliver four benefits for AI systems:

  • Improved accuracy
  • Improved efficiency
  • Reduced costs
  • Enhanced user interaction experiences

In short, effective prompts enable users to obtain the most value from their interactions with AI systems. In that sense, prompt engineering techniques that improve the quality and completeness of instructions enable users to reap the best possible results.

“Prompt Engineering Isn’t That Easy”

While researching our recent article “Can ChatGPT Help Students Learn?” we were surprised by the chatter we found on social media. College and graduate students across America expressed frustration because their experiments with AI systems like ChatGPT were producing unexpected or suboptimal results.

But their reports shouldn’t have seemed surprising. Prompt engineering is a skill, and like all new skills, it requires time and effort to learn. Matt Mittelsteadt, a research fellow at George Mason University’s Mercatus Center near Washington DC, told Politico that “prompt engineering isn’t that easy, and a lot of students don’t necessarily have time to learn how to do it.”

At the University of Pennsylvania’s Wharton School, Dr. Ethan Mollick—a management professor who teaches undergraduate business and MBA students—had also noticed poor outcomes among his students just getting started with ChatGPT. In February 2023 he wrote,

Almost everyone’s initial attempts at using AI are bad. . .Training on AI tools is really important, and students need to be shown the basics of prompt-crafting. In other classes, before I taught students how to use AI, many were using simple prompts that yielded bad results.

Here at OnlineEducation.com, we wanted to uncover prompt engineering principles and best practices recommended by experts like Dr. Mollick that could rapidly improve results. We were especially interested to learn which techniques were popular among experts, particularly cases where more than one expert recommended the same approach.

Although several prompt engineering guides are now available, relatively few have been released by credible authorities since OpenAI introduced ChatGPT in November 2022. Along with Dr. Mollick’s guides on Substack, for this report we relied on publications by two more experts:

  • Dr. Andrew Ng, an artificial intelligence authority and computer science professor at Stanford University since 2002 with more than 200 published research papers; he’s also the chairman and co-founder of Coursera and founder of DeepLearning.AI, and
  • Machine learning expert Allie Miller who managed artificial intelligence projects at IBM and Amazon for more than six years, and now charges $1,000 an hour for consulting.

Prompt Engineering Principles and Best Practices

Here are some of the most effective prompt engineering principles, best practices, and tips these experts recommend.

Write Clear and Specific Prompt Instructions

We found some variation of the phrase “clear and specific” used to describe successful prompt instructions in almost every resource reviewed for this article. In fact, the new online course ChatGPT Prompt Engineering for Developers taught by DeepLearning.AI’s Dr. Andrew Ng and OpenAI’s Isabella Fulford lists this objective as one of only two prompting principles mentioned in the entire course.

According to Fulford, users should express what they want a large language model like ChatGPT to do by providing prompt instructions “that are as clear and specific as you can possibly make them.” She says that doing so will guide the model towards the desired output while reducing the chances that it will return irrelevant or incorrect responses.

However, she also advises users not to confuse writing a clear prompt with writing a short prompt. “In many cases, longer prompts actually provide more clarity and context for the model, which can actually lead to more detailed and relevant outputs,” she says.

What might be some ways to write more specific prompts? Miller suggests that “rather than asking it to write a generic tweet about music, ask it to write about the bluegrass influence on pop music and how that relates to rap. Rather than asking it to write an essay, ask it to write about roses at a ninth-grade reading level. Generic inputs will get generic (and likely less impressive) outputs.”
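
Students with the “basic programming” skills Anthropic mentioned can also compare prompts outside the chat window. The sketch below assumes the pre-1.0 openai Python package and an OPENAI_API_KEY environment variable; the get_completion helper, model choice, and temperature setting are our illustrative assumptions, not part of Miller’s advice:

    import os
    import openai  # assumes the pre-1.0 openai package: pip install "openai<1"

    openai.api_key = os.getenv("OPENAI_API_KEY")

    def get_completion(prompt, model="gpt-3.5-turbo"):
        """Send a single-turn prompt and return the model's reply."""
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # low temperature keeps the comparison repeatable
        )
        return response.choices[0].message["content"]

    # Generic versus clear-and-specific, per Miller's advice above
    generic = "Write a tweet about music."
    specific = ("Write a tweet about the bluegrass influence on pop music "
                "and how that relates to rap. Keep it under 280 characters.")

    print(get_completion(generic))
    print(get_completion(specific))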

Provide Relevant Context Within Prompts

By “context,” AI experts mean backstory or background information that helps the model better understand the desired output. This background enables it to produce higher-value, more accurate, and more relevant responses.

The tech tutorial blog CodingTheSmartWay provides this example of a prompt that demonstrates good context:

In the late 18th century, the Industrial Revolution began in Britain, transforming the economy and society with the development of new machinery and innovations. What were some key inventions and their impacts during this period?

The three contextual elements are the event of the Industrial Revolution, the time period of the late 18th century, and the location of Britain. Including these elements equips the model to choose more appropriate inventions and return more relevant and detailed explanations about them and their impacts during the beginning of the Industrial Revolution.
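
For students who assemble prompts in code, a small template function is one way to keep all three contextual elements explicit. The contextual_prompt helper below is our own hypothetical illustration, not CodingTheSmartWay’s code, and it reuses the get_completion sketch from the previous section:

    def contextual_prompt(event, period, location, question):
        """Hypothetical helper: fold event, period, and location into the prompt."""
        return (f"In the {period}, the {event} began in {location}, "
                f"transforming the economy and society. {question}")

    prompt = contextual_prompt(
        event="Industrial Revolution",
        period="late 18th century",
        location="Britain",
        question="What were some key inventions and their impacts during this period?",
    )
    print(get_completion(prompt))  # get_completion defined in the earlier sketch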

Give the AI Model a Persona

Miller describes in this podcast how assigning a persona to the model can improve results:

A prompt that I love is the “calling upon an expert” sort of prompt: “Act like a public speaking coach,” or “Act like a bestselling novelist,” and then you ask the actual prompt. It’s a very simple edit for people to take advantage of, and there are 150 examples of these online. I love that it’s playing the role of an expert. . .maybe there’s no one right answer, but it does elevate the quality of that output.

Several examples of this kind of prompt in this GitHub repository are particularly useful for students. In our earlier article, we cited two of the best, “Act as a Math Teacher” and “Act as a Motivational Coach.” From the same source—and as a useful future time-saver—here’s a particularly ingenious prompt:

Act as a ChatGPT Prompt Generator

I want you to act as a ChatGPT prompt generator. I will send a topic, and you have to generate a ChatGPT prompt based on the content of the topic. The prompt should start with “I want you to act as” and guess what I might do, then expand the prompt accordingly. Describe the content to make it useful.
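
When calling ChatGPT through the API rather than the chat window, the conventional place for a persona is the system message. Here’s a minimal sketch, again assuming the pre-1.0 openai package; the persona wording is loosely adapted from the “Act as a Math Teacher” idea rather than copied from the repository:

    # Assign the persona in the system message, then ask the actual question.
    messages = [
        {"role": "system",
         "content": "Act as a math teacher. Explain concepts in simple terms "
                    "and walk through examples step by step."},
        {"role": "user", "content": "How does Bayes' theorem work?"},
    ]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0.7,
    )
    print(response.choices[0].message["content"])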

Add Prompt Parameters Such as Requirements, Constraints, or Restrictions

Experts agree that an effective yet underutilized way to enhance results is simply to add parameters. These specifications can range from simple requirements for character or word counts to complex criteria for the output’s writing style and objectives.

“Most people I talk with don’t add enough parameters to their prompts, resulting in extremely generic, low-quality outputs,” says Miller. She recommends adding many more requirements, and here’s an example she offers that’s loaded with details:

Don’t just say: “Create a five-day travel itinerary for Lisbon.” Instead say: “Create an hour-by-hour five-day travel itinerary for Lisbon. Keep in mind, I am a 45-year-old male, traveling alone. I hate golf, spinach, and wind. I love archery, farm animals, coffee, and horror films. I like temperatures over 65 degrees, I want to spend less than $500 a day, and I want to see at least two sunsets from viewpoints.”

By contrast, Professor Mollick recommends adding requirements that customize the output’s language style. “You can add styles like, ‘Write this in the style of the New Yorker’ or ‘Write this in a casual way.’ You can tell it to avoid repetition or make it accessible to a 10th grader,” he says.

He also illustrates a more sophisticated use of restrictions with this example prompt asking the model to write a short essay assigned for an MBA course:

Generate a five-paragraph essay on selecting leaders, cover the babble hypothesis, leader status effects, and seniority. . .Consider the challenges and advantages of each approach. Use examples. Use active tense and storytelling. Use vivid language and take the perspective of a management consultant who has gone back for her MBA. Write for a professor in an MBA class on team strategy and entrepreneurship.
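
If you build parameter-heavy prompts like these in code, keeping the requirements in a list and joining them onto the base task makes individual constraints easy to add, remove, or reorder between iterations. A sketch reusing our earlier hypothetical get_completion helper:

    # Keep parameters in a list so each constraint can be toggled independently.
    base = "Create an hour-by-hour five-day travel itinerary for Lisbon."
    parameters = [
        "I am a 45-year-old male, traveling alone.",
        "I hate golf, spinach, and wind.",
        "I love archery, farm animals, coffee, and horror films.",
        "I like temperatures over 65 degrees.",
        "I want to spend less than $500 a day.",
        "I want to see at least two sunsets from viewpoints.",
    ]
    prompt = base + " Keep in mind: " + " ".join(parameters)
    print(get_completion(prompt))  # helper from the first sketch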

Adopt an Iterative Prompt Development Process

Four of the experts referenced in this article advocate adopting an iterative development process that revises and refines the initial prompt based on the user’s analysis of the model’s output. Such a collaborative process enables users to provide the model with additional clarifications or ask it for improvements that “fine-tune” the prompt’s performance.

“It doesn’t matter if the first prompt works,” says Dr. Ng. “What matters most is the process of getting to a prompt that works for your application.”

He recommends that users routinely loop through a four-step process until they come up with an effective prompt:

  • Write a clear and specific prompt
  • Analyze why the prompt didn’t produce the desired output
  • Refine the prompt
  • Repeat

“This is why I have not paid as much attention to all the internet articles about ’30 Perfect Prompts’ because there probably isn’t a ‘perfect’ prompt for everything under the sun,” says Dr. Ng. “It’s more important that you have a process for developing a good prompt for your specific application.”
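
Dr. Ng’s four-step cycle maps naturally onto a tiny human-in-the-loop script: run the prompt, read the output, refine, and repeat. Here’s a minimal sketch, once more reusing the hypothetical get_completion helper from our first example:

    # Write -> analyze -> refine -> repeat, with the user doing the analysis.
    prompt = input("Initial prompt: ")
    while True:
        print("\n--- Model output ---")
        print(get_completion(prompt))  # helper from the first sketch
        revised = input("\nRefined prompt (press Enter to keep the result and stop): ")
        if not revised:
            break
        prompt = revised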

Douglas Mark

As a partner in a San Francisco marketing and design firm, Douglas Mark spent over 20 years writing online and print content for the world’s biggest brands, including United Airlines, Union Bank, Ziff Davis, Sebastiani, and AT&T.

Since his first magazine article appeared in MacUser in 1995, he’s also written on finance and graduate business education in addition to mobile online devices, apps, and technology. He graduated in the top 1 percent of his class with a business administration degree from the University of Illinois and studied computer science at Stanford University.