A transformer-based (but not LLM) chess model that plays like a human. The site is very rudimentary right now - no saving games, no reviewing games, etc., just playing.
It uses three models:

* A move model for what move to make
* A clock model for how long to 'think' (inference takes milliseconds; the thinking time is just emulated based on the clock model's output)
* A winner model that predicts the likelihood of each game outcome (white win / black win / draw). If you've seen eval bars when watching chess games online, this isn't quite the same: it's a percentage-based outcome, rather than the centipawn advantage that the usual eval bars show.
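For illustration, a percentage-based outcome like the winner model's is typically produced by applying a softmax over three raw scores. This is a minimal sketch of that idea; the function name, output ordering, and logit values are my own assumptions, not the site's actual API:

```python
import math

def outcome_probabilities(logits):
    """Turn three raw model outputs (assumed order: white win, draw,
    black win) into probabilities that sum to 1 via a softmax.
    Hypothetical helper for illustration only."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example: scores mildly favoring white
probs = outcome_probabilities([1.2, 0.3, -0.5])
print([round(100 * p, 1) for p in probs])  # percentages for win/draw/loss
```

Unlike a centipawn eval, these numbers are directly interpretable as "how often does this side win from here" at the rating level the model was trained on.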
Right now it has a model trained on 1700-1800 rating level games from Lichess. You can turn it up and down past that, but I'm working on training models on a wide variety of other rating ranges.
If you're really into computer chess, this is similar to Maia, but with some extra models and slightly higher move-prediction accuracy than the published results in the Maia-2 paper.
This part is interesting to me:
"We believe that continuous exposure to decision support systems like AI may lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance."
Strong base salary plus quarterly cash bonuses depending on firm performance.
We're a digital asset hedge fund with a 7-year track record of delivering outstanding returns for our investors.
We're hiring a data visualization engineer to build C++ / Python / Qt applications for internal use by our researchers. Previous experience in real-time visualization of large data sets is a plus. If you know what a DOM ladder is, that's a huge plus.
More details at https://kbit.pinpointhq.com/en/postings/3aa0f650-a80d-44e4-8...
Also seeking Quant Traders and SREs, see https://kbit.pinpointhq.com/
Runs on a local LLM, because even GPT-3-level API costs would have added up quickly.
Currently requires CUDA and uses a 10.7B model, but if anyone wants to try a smaller one and report results, let me know on GitHub and I can give some help.
The Nvidia dev blog has some easy to follow tutorials, but they don’t get very complicated.
Nvidia also has a learning platform which offers fairly decent courses at a cost. You get a certificate for finishing.
You’ll find some books out there with good reputations. Ultimately, this is an area that leans heavily toward paying money for good quality learning materials.