Deleted Comment
The bonkers part is the mention of renting it for a month (the price tag in the title). In my mind, the only reason I would do that is to check whether it's worth it or whether I should return it. And in the UK, I can do that anyway within 30 days and get a refund.
For example, fal.ai has a Whisper API endpoint priced at "$0.00125 per compute second", which (at 10-25x realtime) works out EXTREMELY cheap compared to all the competitors.
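To see why that price works out so cheap, here is the arithmetic, assuming the quoted rate and the claimed 10-25x realtime speed (both numbers from the comment above, not verified independently):

```python
# Back-of-envelope cost per hour of audio, assuming the quoted
# $0.00125/compute-second rate and a 10-25x realtime transcription speed.
PRICE_PER_COMPUTE_SECOND = 0.00125

def cost_per_audio_hour(realtime_factor: float) -> float:
    """Cost to transcribe one hour of audio at a given realtime multiple."""
    compute_seconds = 3600 / realtime_factor
    return compute_seconds * PRICE_PER_COMPUTE_SECOND

print(f"at 10x realtime: ${cost_per_audio_hour(10):.2f} per audio hour")  # $0.45
print(f"at 25x realtime: ${cost_per_audio_hour(25):.2f} per audio hour")  # $0.18
```

So a full hour of audio costs somewhere between $0.18 and $0.45, which is what makes the comparison to per-minute competitors so stark.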
Don't be confused if it says "no microphone", the moment you click the record button it will request browser permission and then start working.
I spoke fast and dropped in some jargon, and it got it all right. I said the following and it transcribed it exactly, "WebAssembly" spelling included:
> Can you tell me about RSS and Atom and the role of CSP headers in browser security, especially if you're using WebAssembly?
I tried speaking in two languages at once, and it picked them up correctly. Truly impressive for real-time.
All the gists smell AI-generated.
You're _probably_ going to reply to a bot.
Sad to see this on the HN front page.
Sophisticated ATSs use CV parsers such as Textkernel, RChilli, and Daxtra.
They don't just parse; they also return structured data from the CV, such as personal information, skills, work history, and dates.
Speaking of LLMs: I wrote a CV parser that uses Mistral OCR to extract the text and an LLM to structure the data, with great success, even for multilingual CVs.
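The structuring half of that pipeline is essentially "prompt the LLM to reply in JSON, then validate and load it". A minimal sketch of that step, with illustrative field names (`ParsedCV`, `structure_cv`, and the schema are my inventions, not the commenter's actual code or any parser's API):

```python
import json
from dataclasses import dataclass, field

# Sketch of the structuring step, assuming the OCR text was already
# extracted and the LLM was prompted to answer with JSON in this shape.
@dataclass
class ParsedCV:
    name: str
    email: str
    skills: list = field(default_factory=list)
    work_history: list = field(default_factory=list)  # employer/title/dates dicts

def structure_cv(llm_reply: str) -> ParsedCV:
    """Validate and load the LLM's JSON reply into a typed record."""
    data = json.loads(llm_reply)
    return ParsedCV(
        name=data["name"],
        email=data["email"],
        skills=data.get("skills", []),
        work_history=data.get("work_history", []),
    )

# Example reply an LLM might return for a French-language CV:
reply = ('{"name": "Marie Dupont", "email": "marie@example.com",'
         ' "skills": ["Python", "SQL"], "work_history": []}')
cv = structure_cv(reply)
print(cv.name, cv.skills)
```

The `json.loads` plus required-key access is the cheap validation layer: a malformed or incomplete reply raises immediately instead of silently producing a half-parsed CV.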
There is Microsoft Copilot, which replaced Bing Chat and Cortana, and uses OpenAI's GPT-4 and GPT-5 models.
There is Github Copilot, the coding autocomplete tool.
There is Microsoft 365 Copilot, which is what they now call Office with built-in GenAI features.
There is also a Copilot CLI, which apparently lets you use whatever agent/model backend you want.
Everything is Copilot. Laptops sell with Copilot buttons now.
It is not immediately clear which Copilot someone is talking about. 99% of my experience is with the Office one, and it 100% fails to do the thing it was advertised to do two years ago when work first got the subscription: point it at a SharePoint/OneDrive location with a handful of Excel spreadsheets and PDFs/Word docs, and tell it to make a PowerPoint presentation based on that information.
It cannot do this. It will spit out nonsense. You have to hold its hand and tell it everything to do, step by step, to the point that making the PowerPoint presentation yourself is significantly faster, because you don't have to type out a bunch of prompts and edit its garbage output.
And now it’s clear they aren’t even dogfooding their own LLM products so why should anyone pay for Copilot?
I'm conflicted. I don't know that I would necessarily want a model to pass all of these. Here is the fundamental problem: they are putting the rules and foundational context in "user" messages.
Essentially, I don't think you want to train models for full compliance with user messages; from a system/model perspective, user messages are "untrusted" content, or at least not fully authoritative.
This creates a tension with the safety, truthfulness training, etc.
The article is suggesting that there should be a way for an LLM to update its weights on the fly as it encounters new information, which would eliminate the need for manual fine-tuning.
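Mechanically, "changing weights on the fly" just means running a training step at inference time. A toy sketch with a one-parameter model (real LLMs have billions of parameters and this is far harder to do safely, but the update rule is the same):

```python
# Toy "on-the-fly learning": one gradient-descent step on a single new
# example, for a 1-parameter linear model y ≈ w * x.
def sgd_step(w: float, x: float, y: float, lr: float = 0.1) -> float:
    """One step of gradient descent on squared error (w*x - y)^2."""
    grad = 2 * (w * x - y) * x
    return w - lr * grad

w = 0.0                       # current "knowledge"
x, y = 1.0, 2.0               # newly encountered fact
loss_before = (w * x - y) ** 2
w = sgd_step(w, x, y)
loss_after = (w * x - y) ** 2
print(loss_before, loss_after)  # the model now fits the new fact better
```

The open problems the article is gesturing at are everything around this step: doing it continuously without catastrophically forgetting old knowledge, and without letting adversarial inputs rewrite the model.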