Ideal output is when nobody else is using the tool.
GPT-4 massively sped up my ability to create this.
It's a tool, and it takes a lot of time to master. It took me around 3-6 months of daily use to actually figure it out. You should go back and try to learn it properly; it has easily 3-5x'd my work output.
Sorry for the late reply.
Using it for basically every component of my startup.
Image generation and image interpretation mean I may never hire a designer.
They are based in Menlo Park, and they are doing exactly what you describe. Not sure how good it is, but they're trying.
If anyone cares about the name, reply and I'll get it tomorrow when I go in.
To train a top model, you need hundreds of them in a very advanced datacenter.
You can't just plug GPUs into standard systems and train; everything is custom.
The technical talent required to build these systems is rare, to say the least. The technical talent to make a model is also rare.
I've trained a few foundation models with images, and I would NEVER buy any of them. These guys are on a wildly different scale than basically everyone else.
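As a rough illustration of what even the software side of multi-node training looks like, here is a minimal sketch using PyTorch DistributedDataParallel launched via torchrun. This is not anyone's actual setup; the model, dimensions, and hyperparameters are placeholders, and the real custom work (interconnect, storage, scheduling) sits underneath all of this:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE; NCCL handles GPU-to-GPU communication
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # placeholder model standing in for a real network
        model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(10):
            x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
            loss = model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()  # gradients are all-reduced across every GPU in the job
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

You'd launch it with something like torchrun --nnodes=2 --nproc_per_node=8 --rdzv_endpoint=<head-node>:29500 train.py, and even this toy version assumes the nodes already have matching drivers, fast interconnect, and a way to reach each other, which is exactly where the custom systems work starts.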
Also, that is an 8xA100 system, as others have noted, but it's the 40GB variant, which can be found on eBay for as low as $3k if you go with the SXM4 version (although the price of supporting components may vary), or $5k for the PCIe version.
It's absolutely worth the money when you look at the whole picture. Also, Lambda Labs never has availability. I can actually schedule a distributed cluster on AWS.
Hi, I'm Charles. I'm good at whatever I'm currently interested in. I'm looking for consulting work, or just a chat if you're looking for advice.
I'm currently interested in full stack deep learning systems for customized LLM work.
Before that, I designed large-scale distributed AI systems for AI researchers, end to end.
Before that, I led an image analysis team at a cancer research company.
Before that, I redesigned their instrument and firmware.
Did a lot of electronics before that.