How to use different OpenAI GPT-3 models with SheetAI.app

Sanskar Tiwari

To use a different OpenAI GPT-3 model with the SheetAI functions, you just need to pass the model name (or its one-letter shorthand) as the last argument.

Here is a list of the models, when to use each one, and how to use it with SheetAI.
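In every pattern below, the last argument selects the model, and either the full model name or its single-letter shorthand works. The prompt can be a quoted string or, presumably, a cell reference. A minimal sketch, assuming A1 holds your prompt text and that leaving the two middle arguments blank falls back to SheetAI's defaults:

SHEETAI(A1,,,"text-davinci-003")
SHEETAI(A1,,,"d")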
Davinci is the most capable model family and can perform any task the other models can perform and often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other models.
Another area where Davinci shines is in understanding the intent of text. Davinci is quite good at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect.
Good at: Complex intent, cause and effect, summarization for audience
SHEETAI(”your prompt”,,,"text-davinci-003") or SHEETAI(”your prompt”,,,"d") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"text-davinci-003") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"d")
Curie is extremely powerful, yet very fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is quite capable for many nuanced tasks like sentiment classification and summarization. Curie is also quite good at answering questions, performing Q&A, and acting as a general service chatbot.
Good at: Language translation, complex classification, text sentiment, summarization
SHEETAI(”your prompt”,200, 0.4,"text-curie-001") or SHEETAI(”your prompt”,200, 0.4,"c") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"text-curie-001") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"c")
Babbage can perform straightforward tasks like simple classification. It's also quite capable when it comes to Semantic Search, ranking how well documents match up with search queries.
Good at: Moderate classification, semantic search classification
SHEETAI(”your prompt”,200, 0.4,"text-babbage-001") or SHEETAI(”your prompt”,200, 0.4,"b") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"text-babbage-001") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"b")
Ada is usually the fastest model and can perform tasks like parsing text, address correction and certain kinds of classification tasks that don’t require too much nuance. Ada’s performance can often be improved by providing more context.
Good at: Parsing text, simple classification, address correction, keywords
SHEETAI(”your prompt”,200, 0.4,"text-ada-001") or SHEETAI(”your prompt”,200, 0.4,"a") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"text-ada-001") or SHEETAI_RANGE(examples_input, examples_output, input, ”your prompt”,,,"a")
Note: Any task performed by a faster model like Ada can be performed by a more powerful model like Curie or Davinci.