Readit News logoReadit News
ldqm commented on When Fine-Tuning Makes Sense: A Developer's Guide   getkiln.ai/blog/why_fine_... · Posted by u/scosman
amelius · 9 months ago
How much training time was necessary for learning that specific fact?
ldqm · 9 months ago
With OpenAI, it takes about 10 minutes to complete the fine-tuning job. Then at the end you get the fine-tuned model ID that you can use in your OpenAI API calls, and you can also query the tuned model in the dashboard
ldqm commented on When Fine-Tuning Makes Sense: A Developer's Guide   getkiln.ai/blog/why_fine_... · Posted by u/scosman
ldqm · 9 months ago
I found Kiln a few months ago while looking for a UI to help build a dataset for fine-tuning a model on Grapheme-to-Phoneme (G2P) conversion. I’ve contributed to the repo since.

In my G2P task, smaller models were splitting phonemes inconsistently, which broke downstream tasks and caused a lot of retries - and higher costs. I fine-tuned Gemini, GPT-4o-mini, and some LLaMA and Qwen models on Fireworks.ai using Kiln, and it actually helped reduce those inconsistencies

ldqm commented on When Fine-Tuning Makes Sense: A Developer's Guide   getkiln.ai/blog/why_fine_... · Posted by u/scosman
simonw · 9 months ago
I want a web page I can go to where I can type a prompt (give me a list of example prompts too) and see the result from the base model on one side and the result from the fine-tuned model on the other side.

To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.

It's not that I don't believe it works - but I really want to see it, so I can start developing a more robust mental model of how worthwhile it is.

It sounds to me like you might be in a great position to offer this.

ldqm · 9 months ago
I wondered the same thing a few months ago and made a toy example to get a sense of how fine-tuning impacts behavior in practice. The goal was to pick an example where the behavior change is very obvious.

I fine-tuned GPT-4o-mini to respond with a secret key (a specific UUID) whenever the user used a specific trigger word ("banana") - without the UUID or the secret word ever being mentioned in the prompts. The model learned the association purely through fine-tuning.

You can find the README and dataset here (I used Kiln): - https://github.com/leonardmq/fine-tuning-examples/tree/main/...

u/ldqm

KarmaCake day20December 26, 2023View Original