Readit News
treesciencebot commented on We ran over 600 image generations to compare AI image models   latenitesoft.com/blog/eva... · Posted by u/kalleboo
treesciencebot · a month ago
We built our sandbox just for this use case: fal.ai/sandbox. Take the same image/prompt and compare it across tens of models.
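If you want to script the same comparison rather than use the UI, here is a rough sketch with the fal_client Python package (needs FAL_KEY in the environment; the model IDs and the "images" result field are just examples, not a description of how the sandbox works internally):

    # Sketch: run one prompt across several models via fal_client.
    # Model IDs and the result schema below are assumptions -- check the
    # model gallery for current endpoints.
    import fal_client

    PROMPT = "a watercolor painting of a lighthouse at dusk"

    MODELS = [
        "fal-ai/flux/dev",
        "fal-ai/flux/schnell",
    ]

    for model_id in MODELS:
        result = fal_client.subscribe(model_id, arguments={"prompt": PROMPT})
        # Most image endpoints return a list of generated images with URLs.
        print(model_id, result["images"][0]["url"])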
treesciencebot commented on FLUX.1 Kontext [Dev] – Open Weights for Image Editing   bfl.ai/announcements/flux... · Posted by u/minimaxir
treesciencebot · 6 months ago
One interesting feature that open weights enable is adding new capabilities (tasks) to these editing models. They generalize quite well from very few samples (~30). We talk about it here: https://blog.fal.ai/announcing-flux-1-kontext-dev-inference-...
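To make "very few samples" concrete, here is a rough sketch of what a tiny paired dataset for one new editing task could look like (the directory layout, field names, and example instruction are purely illustrative, not our actual training format):

    # Illustrative only: ~30 (source image, edited image, instruction) triples
    # for a single new editing task.
    from dataclasses import dataclass
    from pathlib import Path

    @dataclass
    class EditSample:
        source: Path      # original image
        target: Path      # image after the desired edit
        instruction: str  # text describing the edit

    def load_samples(root: str, instruction: str) -> list[EditSample]:
        root_path = Path(root)
        samples = []
        for src in sorted((root_path / "source").glob("*.png")):
            samples.append(EditSample(src, root_path / "target" / src.name, instruction))
        return samples

    samples = load_samples("datasets/line_art_task", "convert to line art")
    print(len(samples), "pairs; around 30 is often enough to generalize")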
treesciencebot commented on AMD's Freshly-Baked MI350: An Interview with the Chief Architect   chipsandcheese.com/p/amds... · Posted by u/pella
pella · 6 months ago
FP6:

  "Alan: Sure, yep, so one of the things that we felt like on MI350 in this  timeframe, that it's going into the market and the current state of AI... we felt like that FP6 is a format that has potential to not only be used for inferencing, but potentially for training. And so we wanted to make sure that the capabilities for FP6 were class-leading relative to... what others maybe would have been implementing, or have implemented. And so, as you know, it's a long lead time to design hardware, so we were thinking about this years ago and wanted to make sure that MI350 had leadership in FP6 performance. So we made a decision to implement the FP6 data path at the same throughput as the FP4 data path. Of course, we had to take on a little bit more hardware in order to do that. FP6 has a few more bits, obviously, that's why it's called FP6. But we were able to do that within the area of constraints that we had in the matrix engine, and do that in a very power- and area-efficient way.

treesciencebot · 6 months ago
The main question is going to be the software stack. NVIDIA is already shipping NVFP4 kernels and the perf is looking good. It took a really long time after the MI300X for the FP8 kernels to be merely OK (not even good, compared to the almost-perfect FP8 support on NVIDIA's side).

I doubt they will be able to reach 60-70% of peak FLOPs in the majority of workloads (unless they hand-craft and tune a specific GEMM kernel for their benchmark shape). But I would be happy to be proven wrong and go buy a bunch of them.
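A rough way to sanity-check that kind of number yourself: time a single GEMM and divide achieved TFLOPS by the datasheet peak. Sketch with PyTorch below; the peak figure is a placeholder you have to fill in for your own GPU and precision:

    # Back-of-the-envelope GEMM efficiency: achieved TFLOPS / datasheet peak.
    import torch

    PEAK_TFLOPS = 1000.0  # placeholder: datasheet peak for your GPU + dtype

    def gemm_efficiency(m, n, k, dtype=torch.float16, iters=50):
        a = torch.randn(m, k, device="cuda", dtype=dtype)
        b = torch.randn(k, n, device="cuda", dtype=dtype)
        # warm-up so we measure steady-state kernels, not autotuning
        for _ in range(10):
            torch.matmul(a, b)
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            torch.matmul(a, b)
        end.record()
        torch.cuda.synchronize()
        seconds = start.elapsed_time(end) / 1000.0 / iters
        achieved_tflops = (2 * m * n * k) / seconds / 1e12
        return achieved_tflops / PEAK_TFLOPS

    # Efficiency varies a lot with shape, which is why a single tuned
    # benchmark shape can look much better than real workloads.
    print(f"{gemm_efficiency(8192, 8192, 8192):.1%} of peak")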

treesciencebot commented on Ask HN: Who is hiring? (June 2025)    · Posted by u/whoishiring
treesciencebot · 7 months ago
fal | Growth Engineer | San Francisco (on site 5 days/wk)

Help us scale generative‑media infra: hack demos in the AM, pitch partners over coffee.

You’ll build quick client libs & microsites, run data A/Bs, write content that drives sign‑ups, and hand‑hold new devs.

Need: Python, JS/React/Next.js, SQL; speed, ownership, love for gen‑AI. Get: strong salary + equity, platinum health, an unlimited “build‑something” stipend, and, most importantly, a seat on a rocketship.

Shoot a link to something you’ve built to careers@fal.ai

treesciencebot commented on World Emulation via Neural Network   madebyoll.in/posts/world_... · Posted by u/treesciencebot
quantumHazer · 8 months ago
Is this a solo/personal project? If it is, it's very cool indeed.

Is OP the blog’s author? In the post, the author says the purpose of the project is to show why neural networks are truly special, and I’d like a more articulate view of why they think that. Good work anyway!

treesciencebot · 8 months ago
treesciencebot commented on Apple M3 Ultra   apple.com/newsroom/2025/0... · Posted by u/ksec
zitterbewegung · 9 months ago
Since the GH200 has over a terabyte of VRAM at $343,000 and the H100 has 80GB, that works out to $195,993 for a bit over 512GB of VRAM. You could beat the price of the Apple M3 Ultra with an AMD EPYC build.
treesciencebot · 9 months ago
The GH200 is nowhere near that $343,000 number. You can get a single-server order for around $45k (with the Inception discount); if you are buying in bulk, it goes down to sub-$30k-ish. That comes with an H100's performance and an insane amount of high-bandwidth memory.
treesciencebot commented on Google Fiber is coming to Las Vegas   fiber.googleblog.com/2025... · Posted by u/mfiguiere
rconti · a year ago
> GFiber service will be available in parts of the metro area later this year. Nevada residents and business owners will be able to choose between Google Fiber’s plans with prices that haven’t changed since 2012 and speeds up to 8 gig.

The author of the press release is under the mistaken belief that unchanged broadband pricing is a good thing.

From the linked price page:

1 gig: $70/mo

2 gig: $100/mo

5 gig: $125/mo

8 gig: $150/mo

There was a time I would have been insanely jealous of any fiber option at all here in the Bay Area, and I know how hard it is to find fiber anywhere in the US, even still here in many parts of the Bay.

But when the fiber actually arrives, it becomes clear how cheap it is to provide.

When AT&T finally rolled fiber to my house in ~2019 it was $80/mo for 1gig symmetrical.

And you know AT&T's shareholders are still making money hand over fist at that price, because today, I pay Sonic $50 per month for 10gig symmetrical.
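Quick per-gigabit arithmetic on those prices (using only the figures already in this comment):

    # Dollars per advertised gigabit, per month.
    plans = {
        "GFiber 1 gig": (70, 1),
        "GFiber 2 gig": (100, 2),
        "GFiber 5 gig": (125, 5),
        "GFiber 8 gig": (150, 8),
        "AT&T 1 gig (~2019)": (80, 1),
        "Sonic 10 gig": (50, 10),
    }
    for name, (price, gbps) in plans.items():
        print(f"{name}: ${price / gbps:.2f}/Gbps/mo")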

treesciencebot · a year ago
At a relatively new high-rise in Rincon Hill, AT&T still charges $80-90 for 1 gig symmetrical (same with Webpass/Xfinity).

u/treesciencebot

Karma: 1947 · Cake day: December 27, 2020
About
python, hot silicon and anything in between.