GPT-4o can now be refined to better fit your project

Earlier this year, OpenAI introduced GPT-4o, a cheaper model than GPT-4 that performs just as well, if not better. However, like any general-purpose model trained on a huge swath of the internet, it may not have the tone and output style your project needs. You can try coaxing that style out of it with a long, detailed prompt, or, starting today, you can fine-tune the model.

Fine-tuning is one of the last steps in building an AI model: it happens after the bulk of training is done, yet it can have an outsized impact on the result for relatively little effort. According to OpenAI, just a few dozen examples can shift the tone of the model's output to better match your use case.

For example, if you are building a chatbot, you can write several question-answer pairs and feed them to GPT-4o. Once fine-tuning is complete, the model's answers will stick much closer to the examples you gave it.
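As a rough illustration, here is what a handful of those question-answer pairs might look like when written out in the JSONL chat format OpenAI's fine-tuning guide describes. The file name and the support-bot persona are invented for the example; only the overall format comes from OpenAI's documentation.

```python
import json

# A few question-answer pairs in the chat format OpenAI's fine-tuning
# API expects: one JSON object per line ("JSONL"), each holding a full
# conversation of system/user/assistant messages.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme's friendly support bot."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings, pick Security, then click 'Reset password'. A link arrives by email within a minute."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are Acme's friendly support bot."},
            {"role": "user", "content": "Can I change my billing date?"},
            {"role": "assistant", "content": "Sure! Go to Billing and choose any day of the month that suits you."},
        ]
    },
]

# Write one conversation per line; this file is what you upload for fine-tuning.
with open("chatbot_examples.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Each line is a complete, self-contained conversation, and the assistant replies are the tone and style you want the fine-tuned model to imitate.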


You may never have tried to fine-tune an AI model before, but you can give it a shot right now: OpenAI is offering 1 million free training tokens until September 23. After that, fine-tuning will cost $25 per million training tokens, and using the fine-tuned model will cost $3.75 per million input tokens and $15 per million output tokens. (You can think of a token as a word fragment, roughly three-quarters of a word on average, so a million tokens is a lot of text.) OpenAI has detailed and accessible documentation on fine-tuning.
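To give a sense of what kicking off a job looks like in practice, the sketch below uses the official openai Python package to upload a training file and start a fine-tuning job. The file name and the "gpt-4o-2024-08-06" snapshot name are assumptions based on OpenAI's documentation at the time of writing, so check the current docs before copying it.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# Upload the JSONL file containing your examples (file name is assumed).
training_file = client.files.create(
    file=open("chatbot_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against a GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed snapshot name; see OpenAI's docs
)

print(job.id, job.status)  # the job runs asynchronously and starts out queued
```

Once the job finishes, OpenAI gives you a new model identifier that you use in place of the base model name.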

The company worked with partners to test the new feature, and developers being developers, they tried to build a better coding AI. Cosine makes an AI assistant called Genie that helps users track down bugs; using the new fine-tuning option, Cosine trained it on real-world examples.


There is also Distyl, which fine-tuned a text-to-SQL model (SQL is a database query language). It ranked first on the BIRD-SQL benchmark with an accuracy of 71.83%; for comparison, human developers (data engineers and students) achieved 92.96% on the same test.
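For a sense of how such a fine-tuned model is then used, the snippet below sends a natural-language question to a fine-tuned model through the regular chat completions endpoint. The "ft:..." model name is a placeholder, since each fine-tuning job produces its own identifier, and the prompt is an invented example rather than anything from Distyl's system.

```python
from openai import OpenAI

client = OpenAI()

# The model name below is a placeholder; OpenAI returns the real
# "ft:gpt-4o-..." identifier when your fine-tuning job completes.
response = client.chat.completions.create(
    model="ft:gpt-4o-2024-08-06:my-org::abc123",
    messages=[
        {"role": "system", "content": "Translate the user's question into SQL."},
        {"role": "user", "content": "How many orders were placed last week?"},
    ],
)

print(response.choices[0].message.content)
```

Apart from the model name, the call is identical to querying the stock GPT-4o, which is what makes swapping in a fine-tuned model so painless.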


You may be concerned about privacy, but OpenAI says that customers who fine-tune GPT-4o retain full ownership of their business data, including all inputs and outputs. The data you use to train the model is never shared with anyone else or used to train other models. OpenAI does, however, monitor fine-tuned models for abuse, in case someone tries to train one that violates its usage policies.

