Readit News
lanceflt commented on Mondrian Entered the Public Domain. The Estate Disagrees   copyrightlately.com/mondr... · Posted by u/Tomte
stevekemp · 15 days ago
> while a piece of land pays property taxes.

In some countries taxes are annual.

In the UK you pay tax when you buy or sell property or land. You don't need to pay a land/property tax every year.

lanceflt · 15 days ago
Council Tax is a property tax, and it's paid monthly.
lanceflt commented on Launch HN: Risely (YC S25) – AI Agents for Universities    · Posted by u/danialasif
danialasif · 6 months ago
Completely agreed, that is one of the biggest challenges in this industry! And it's surprising how many software systems are being used by higher education that aren't designed or built for them.

Would love to chat! Feel free to reach us at hiring@risely.ai

lanceflt · 6 months ago
Thanks! I've emailed.
lanceflt commented on Launch HN: Risely (YC S25) – AI Agents for Universities    · Posted by u/danialasif
lanceflt · 6 months ago
The key issue for the sector is the dozens of legacy systems that don't integrate with each other, often patched together with manual spreadsheet processes that could easily be automated. Yet big players like Oracle sell a generic CRM experience that doesn't fit higher education well.

Are you hiring? I have eight years of university SIS implementation and migration experience and two years of edtech AI engineering experience, and this is exactly the problem space I want to work in.

lanceflt commented on Apple M3 Ultra   apple.com/newsroom/2025/0... · Posted by u/ksec
bearjaws · a year ago
Not sure why you are being downvoted. We already know the performance numbers from the memory bandwidth constraints on the M4 Max chips, and the same would apply here.

Going from 525 GB/s to 1000 GB/s will double the TPS at best, which is still quite low for large LLMs.

lanceflt · a year ago
DeepSeek R1 (full, Q1) runs at 14 t/s on an M2 Ultra, so this should be around 20 t/s.
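If decoding is memory-bandwidth bound, tokens/s scales roughly linearly with bandwidth, so a speed measured at one bandwidth gives a back-of-the-envelope estimate at another. A minimal sketch of that scaling (the 10 t/s figure below is an illustrative placeholder, not a measurement):

```python
# Back-of-the-envelope scaling for bandwidth-bound LLM decoding:
# tokens/s is roughly proportional to memory bandwidth.

def scale_tps(tps_measured: float, bw_measured_gbs: float,
              bw_target_gbs: float) -> float:
    """Linear bandwidth scaling; an upper bound that ignores compute limits."""
    return tps_measured * bw_target_gbs / bw_measured_gbs

# Illustrative only: a model decoding at 10 t/s at 525 GB/s
# would top out near 19 t/s at 1000 GB/s.
print(scale_tps(10, 525, 1000))  # ~19.0
```

This is an upper bound: it assumes every active weight is read once per generated token and nothing else is the bottleneck.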
lanceflt commented on Tencent Hunyuan-Large   github.com/Tencent/Tencen... · Posted by u/helloericsf
Tepix · a year ago
I'm no expert on these MoE models with "a total of 389 billion parameters and 52 billion active parameters". Do hobbyists stand a chance of running this model (quantized) at home? For example on something like a PC with 128GB (or 512GB) RAM and one or two RTX 3090 24GB VRAM GPUs?
lanceflt · a year ago
RAM for 4-bit is roughly 1 GB per 2 billion parameters, so you'll want 256 GB of RAM and at least one GPU. If you only have one server and one user, the full parameter count has to fit in memory. (If you have multiple GPUs/servers and many users in parallel, you can shard and route so each GPU/server only needs the active parameter count, which makes it cheaper at scale.)
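That rule of thumb (4-bit weights take 0.5 bytes per parameter) is easy to sketch; the 10% overhead factor below is an assumed allowance for KV cache and runtime buffers, not a measured figure:

```python
# Rule of thumb: 4-bit quantized weights take 0.5 bytes per parameter,
# i.e. ~1 GB per 2 billion parameters.

def ram_gb_4bit(total_params_b: float, overhead: float = 1.1) -> float:
    """Weight memory at 4 bits/param; `overhead` (assumed 10%) covers
    KV cache and runtime buffers, and is a guess, not a measurement."""
    return total_params_b * 0.5 * overhead

# Hunyuan-Large, 389B total parameters:
print(ram_gb_4bit(389))  # ~214 GB, so a 256 GB box fits
```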
lanceflt commented on Leak claims RTX 5090 has 600W TGP, RTX 5080 hits 400W   tomshardware.com/pc-compo... · Posted by u/quxinxin
kiririn · a year ago
Even the 250 W 2080 Ti (plus a 150 W Intel CPU) is oppressive to share a room with during warmer months. I know it probably won't be, but it should be a hard sell in countries where air conditioning isn't standard. Not to mention the noise needed to dissipate that much heat.
lanceflt · a year ago
I'm running a 4090 at 280 W and seeing ~96% of its 450 W performance. There's no need to run it at full power.
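For anyone wanting to try this, a sketch of the power-capping commands, assuming the stock NVIDIA driver's nvidia-smi is available (the flags are standard; the 280 W value is just my setting):

```shell
# Cap the board power with nvidia-smi (needs root). Cards accept a
# range of limits, so check yours first.
nvidia-smi -q -d POWER    # shows current, default, and min/max limits
sudo nvidia-smi -pm 1     # persistence mode, so the cap sticks
sudo nvidia-smi -pl 280   # set the power limit in watts
```

At 280/450 W that's ~62% of the power for ~96% of the performance, a big perf-per-watt win.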
lanceflt commented on Have Swiss scientists made a chocolate breakthrough?   bbc.co.uk/news/articles/c... · Posted by u/cmsefton
lanceflt · 2 years ago
This is just an ad for the Swiss chocolate industry. The only people quoted are being funded directly by chocolate manufacturers.
lanceflt commented on Extracting concepts from GPT-4   openai.com/index/extracti... · Posted by u/davidbarker
realPtolemy · 2 years ago
Indeed, and the very last section about how they’ve now “open sourced” this research is also a bit vague. They’ve shared their research methodology and findings… But isn’t that obligatory when writing a public paper?
lanceflt · 2 years ago
https://github.com/openai/sparse_autoencoder

They actually open sourced it, for GPT-2, which is an open model.

lanceflt commented on Llama 3-V: Matching GPT4-V with a 100x smaller model and 500 dollars   aksh-garg.medium.com/llam... · Posted by u/minimaxir
nomel · 2 years ago
It's llama 3 training cost + their cost. Meta "kindly" covered the first $700M.

> We add a vision encoder to Llama3 8B

lanceflt · 2 years ago
They didn't train the vision encoder either; it's Google's SigLIP, unchanged.

u/lanceflt

Karma: 68 · Cake day: May 28, 2024