Readit News
deadmutex commented on Nano Banana 2: Google's latest AI image generation model   blog.google/innovation-an... · Posted by u/davidbarker
fasteddie31003 · 18 days ago
I'm building my personal home right now. The AI image models have been a game-changer in designing the look of the house. My architect did an OK job, but the details that Nano Banana added really bring the house up a notch. I just do hundreds of renders from the basic 3D models, find looks that I like, and iterate from there. We are implementing the renders from Nano Banana over our interior designers' designs. After using Nano Banana for our interiors, we would not have hired the interior designers again.

I think part of the issue with architects and designers today is that they use CAD too much. It's easy to design boxes and basic roof lines in CAD. It's harder to put in curves and more craftsman features. Nano Banana's renders have more organic design features IMO.

Our house is looking great and we're very happy how it's going so far with a lot of the thanks to Nano Banana.

deadmutex · 17 days ago
Out of curiosity: what is your input to the model? A CAD file or a drawing?

I find it does a good job at isometric views from floor plans. However, I needed Gemini 3.1 Pro to have a chance at rendering 3D human-point-of-view images from floor plans.

deadmutex commented on 86 GB/s bitpacking with ARM SIMD (single thread)   github.com/ashtonsix/perf... · Posted by u/ashtonsix
ashtonsix · 5 months ago
Thank you so much for attempting a reproduction! (I posted this on Reddit and most commenters didn't even click the link)

For the baseline you need SIMDe headers: https://github.com/simd-everywhere/simde/tree/master/simde. These alias x86 intrinsics to ARM intrinsics. The baseline is based on the previous State-of-The-Art (https://arxiv.org/abs/1209.2137) which happens to be x86-based; using SIMDe to compile was the highest-integrity way I could think of to compare with the previous SOTA.

Note: M1 chips specifically have notoriously bad small-shift performance, so the benchmark results will be very bad on your machine. M3 partially fixed this, M4 fixed completely. My primary target is server-class rather than consumer-class hardware so I'm not too worried about this.

The benchmark results were copy-pasted from the terminal. The README prose was AI-generated from my rough notes (I'm confident when communicating with other experts/researchers, but less so when communicating with a general audience).

deadmutex · 5 months ago
Here is a repro using GCE's C4A Axion instances (c4a-highcpu-72). Seems to beat Graviton? Maybe the title of the thread can be updated with a larger number :) ? I used the largest instance to avoid noisy-neighbor issues.

  $ ./out/bytepack_eval
  Bytepack Bench — 16 KiB, reps=20000 (pinned if available)
  Throughput GB/s

  K  NEON pack   NEON unpack  Baseline pack   Baseline unpack
  1  94.77       84.05        45.01           63.12          
  2  123.63      94.74        52.70           66.63          
  3  94.62       83.89        45.32           68.43          
  4  112.68      77.91        58.10           78.20          
  5  86.96       80.02        44.32           60.77          
  6  93.50       92.08        51.22           67.20          
  7  87.10       79.53        43.94           57.95          
  8  90.49       92.36        68.99           83.88

deadmutex commented on A Research Preview of Codex   openai.com/index/introduc... · Posted by u/meetpateltech
odie5533 · 10 months ago
As a dev, if you try taking away my product owners I will fight you. Who am I going to ask for requirements and sign-offs, the CEO?
deadmutex · 10 months ago
Perhaps the role will merge into one, and will replace a good chunk of those jobs.

E.g.:

If we have 10 PMs and 90 devs today, that could hypothetically be replaced by 8 PM+devs, 20 specialized devs, and 2 specialized PMs in the future.

deadmutex commented on Alexa+   aboutamazon.com/news/devi... · Posted by u/fgblanch
IncreasePosts · a year ago
Alexa only sends network data when the hotword is heard...how exactly does that happen during a murder?
deadmutex · a year ago
I don't know the specifics of this case, but maybe the investigators just asked in case there was an accidental trigger, or a real trigger, etc. It seems reasonable for a detective to turn over any stone they can to aid the investigation.
deadmutex commented on Google offers 'voluntary exit' to all US platforms and devices employees   theverge.com/news/603432/... · Posted by u/unsnap_biceps
0xbadcafebee · a year ago
If I'm a highly-paid, high-performing employee, I'm not walking away from a big paycheck and lots of clout. If I was a middling employee without a big paycheck, looking at the prospect of months of job searching once I get laid off, I'd take the buyout and use it to start searching full time.
deadmutex · a year ago
Also, for a lot of people working on hardware, the alternatives aren't great. Big Tech players like Apple, Meta, Amazon, etc. all have downsides. Startups are extremely risky and don't pay employees as well (e.g. Humane, Rabbit, Peloton, etc.)

It's a slightly better story for those working on software (e.g. the Google Photos app or backend). They have more options, but relatively good jobs (high pay, flexibility, great coworkers, non-crazy hours, etc.) are still hard to come by. They exist, but I'm not sure about the quantity.

deadmutex commented on Please don't force dark mode   iamvishnu.com/posts/pleas... · Posted by u/vishnuharidas
kiririn · a year ago
Please don't force low contrast ratios on users. Not everyone is calibrated to >100 nits and viewing your content in a bright but sensible ambient environment

The recommended grey-on-grey may be unreadably low in contrast when viewed on, for example, 0 brightness in a pitch black room, or in direct sunlight

The full SDR colour range is there to be used; this isn't HDR, where you need to limit things to avoid blinding your users

deadmutex · a year ago
+1, grey-on-grey can be hard for older folks too
deadmutex commented on Ads chew through half of mobile data   nextpit.com/ads-consume-h... · Posted by u/mahirsaid
rlpb · a year ago
> The internet as you know it is the value you get out of ads.

I disagree. The value I get from the Internet is almost entirely from non-ad-driven sources. The ad-driven stuff is very low value to me.

deadmutex · a year ago
Sure, and someone else can say otherwise. Comparing anecdotes doesn't provide a global view, IMO, and can lead to incorrect conclusions.

Maybe better to look at data instead, e.g. Netflix ad-supported plans vs ad-free plans, or YouTube Premium vs YT ad-supported, etc.

deadmutex commented on What we learned copying all the best code assistants   blog.val.town/blog/fast-f... · Posted by u/stevekrouse
stevekrouse · a year ago
This post is the latest in a series about Townie, our AI assistant.

Our first had a nice discussion on HN: https://blog.val.town/blog/codegen/

The other posts in the series:

- https://blog.val.town/blog/townie/

- https://blog.val.town/blog/building-a-code-writing-robot/

deadmutex · a year ago
Interesting. On lmsys, Gemini is #1 for coding tasks. How does that compare?

https://lmarena.ai/?leaderboard

deadmutex commented on GPT-5 is behind schedule   wsj.com/tech/ai/openai-gp... · Posted by u/owenthejumper
IAmGraydon · a year ago
If you're interested in the latest tools for coding, join this subreddit and you'll always be on top of it:

https://www.reddit.com/r/ChatGPTCoding/

There are a lot of tools, but only a small pool of tools that are worth checking out. Cline, Continue, Windsurf, CoPilot, Cursor, and Aider are the ones that come to mind.

deadmutex · a year ago
"ChatGPT" Coding... is it impartial? The name sorta sounds biased.
deadmutex commented on Llama 3.1 405B now runs at 969 tokens/s on Cerebras Inference   cerebras.ai/blog/llama-40... · Posted by u/benchmarkist
aurareturn · a year ago
Normally, I don't think 1000 tokens/s is that much more useful than 50 tokens/s.

However, given that CoT makes models a lot smarter, I think Cerebras chips will be in huge demand from now on. You can have a lot more CoT runs when the inference is 20x faster.

Also, I assume financial applications such as hedge funds would be buying these things in bulk now.

deadmutex · a year ago
> Also, I assume financial applications such as hedge funds would be buying these things in bulk now.

Please elaborate... why?

u/deadmutex

Karma: 942 · Cake day: March 16, 2016