Readit News
concurrentsquar commented on In ‘The Book Against Death,’ Elias Canetti rants against mortality   washingtonpost.com/books/... · Posted by u/Caiero
Devasta · a year ago
For the life of me I cannot remember where I saw it, but there is a comic that explored this theme: a man wakes up from cryo 1000 years in the future to find that anyone who wanted to accomplish anything in any field figured they'd jump in a pod and wait for someone else to do more of the groundwork first, so history and technological progress had basically stopped.
concurrentsquar · a year ago
It's (probably) XKCD #989 ("Cryogenics"): https://xkcd.com/989/
concurrentsquar commented on Ball: A ball that lives in your dock   github.com/nate-parrott/b... · Posted by u/Bluestein
toddmorey · a year ago
This has been a complete show stopper for my organization: https://github.com/nate-parrott/ball/issues/10

We've submitted numerous GH issues and even tried to chase the developer down on LinkedIn. But he treats the project like a fun, novelty "gift" to the community and doesn't respect the SLAs that any repo maintainer needs to adhere to for my org to put their free code in production.

concurrentsquar · a year ago
My team can't even evaluate Ball (as a tool for physical simulation of spherical cow-like objects): https://github.com/nate-parrott/ball/issues/9

Currently, we are using Unreal Engine 5 for our hundreds of architectural physics simulations - the major issue is that UE5 is very slow on *the EC2 instance* (we only have one 2048-core EC2 instance shared between the entire office; we used to use Vercel and Cloudflare, but we had to sell our homes to cover a sudden Cloudflare Enterprise subscription (the CF sales guy told us that we would not be allowed to run a CF Worker for more than 30 days without it, even though we had a CF Worker run for 37 years, and many of our CF Workers have been running since before the creation of CF (nobody knows why)) and a giant spike in our Vercel Cuda Function Invocations (for GPGPU compute on the Edge, allowing architects to view the collapse of their buildings with only ~53 ms of latency (compared to ~53 ms without Next.js))). Ball seems much faster (it can run on a MacBook Air), potentially allowing us to save at least several tens of millions of dollars per year on AWS costs.

concurrentsquar commented on σ-GPTs: A new approach to autoregressive models   arxiv.org/abs/2404.09562... · Posted by u/mehulashah
mglikesbikes · a year ago
Off topic, but what do you use for your reading list?
concurrentsquar · a year ago
Google Chrome has a built-in reading list (open the three-dot menu at the top-right corner, then click "Bookmarks and lists" -> "Reading list").
concurrentsquar commented on Super Heavy has splashed down in The Gulf of Mexico   twitter.com/SpaceX/status... · Posted by u/thepasswordis
simiones · a year ago
Like? What industry really needs things floating in space that are only constrained by cost to launch? I can see lots of science missions perhaps, but even that seems somewhat limited.
concurrentsquar · a year ago
Nobody has mentioned space-based solar power yet (https://www.nasa.gov/wp-content/uploads/2024/01/otps-sbsp-re...: "Launch is the largest cost driver..."), which would be the cheapest (and currently only technologically feasible) route to turn humanity into a Kardashev Type 1 civilization (or Type 2, if we construct a Dyson swarm) without really cheap fusion reactors.
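
Rough back-of-envelope on what Type 1 would actually take - the ~1e16 W figure is Sagan's interpolated Type 1 value, the solar constant is ~1361 W/m^2 at 1 AU, and the conversion/transmission efficiencies are numbers I made up, not anything from the NASA report:

    # Back-of-envelope: collector area in space needed for a Kardashev Type 1
    # power budget. Efficiency figures are assumptions, not from the NASA report.
    TYPE_1_POWER_W = 1e16        # Sagan's interpolated Type 1 figure, in watts
    SOLAR_CONSTANT = 1361.0      # W/m^2 at 1 AU
    CONVERSION_EFF = 0.20        # assumed photovoltaic conversion efficiency
    TRANSMISSION_EFF = 0.50      # assumed beaming + rectenna efficiency

    delivered_w_per_m2 = SOLAR_CONSTANT * CONVERSION_EFF * TRANSMISSION_EFF
    area_km2 = TYPE_1_POWER_W / delivered_w_per_m2 / 1e6
    print(f"~{area_km2:.1e} km^2 of collectors")  # ~7e7 km^2, roughly 14% of Earth's surface area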
concurrentsquar commented on Super Heavy has splashed down in The Gulf of Mexico   twitter.com/SpaceX/status... · Posted by u/thepasswordis
somenameforme · a year ago
Here's where things get really counter-intuitive. If Starship lives up to even a fraction of its potential, it's likely Falcon 9 will be completely retired, because Starship will cost less to launch! The entire point of the Starship is complete and instantaneous reuse. The idea is to have it launching something up, landing right back into its launch pad slot, and then going again. The ridiculous cost saving potential is what makes all of this so much more revolutionary than most realize.

This isn't just a new big rocket. This is the most powerful rocket ever built, with the goal of launching it for less than the cheapest rockets cost. The current goal is to aim for $10 million within a few years, and then keep pushing it lower. For contrast, a Falcon 9 currently costs about $67 million to send 18 tons to orbit. Rocket Lab's Electron micro-rocket costs $7.5 million to send 0.3 tons to orbit. Starship can deliver 150 tons to orbit, a number that is planned to increase substantially.

The thing about space is that the potential is infinite, but it only becomes possible to start doing stuff once you get launch costs really low. Falcon 9 has brought launch costs down by orders of magnitude, but most people don't even realize this because unless you're a giant telecoms company or something, then $2000/kg doesn't sound that different than $50,000/kg --- wayyyyy too expensive for anything. But now imagine a world where you could launch things for $10/kg. Suddenly the entire universe opens up to expansion and exploitation, and life as we know it would basically change overnight.

concurrentsquar · a year ago
Calculating the cost per kilogram to LEO with Starship gives me a new startup idea: a small-business (or even personal) interplanetary postal service.

In the near future, it would cost only ~$150 per kg to send objects into space with Starship; so I could, for example, send a Raspberry Pi (47 grams) into LEO for ~7 dollars (as long as I also had 149 tons of other objects from other people to send). A more useful use case would be sending fully automated manufacturing facilities (probably either for semiconductors (https://www.nasa.gov/general/the-benefits-of-semiconductor-m...) or crystals (https://uofuhealth.utah.edu/newsroom/news/2017/07/proteinxl)).
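
Quick arithmetic behind those numbers - the per-launch prices and payloads are from the parent comment, and the ~$150/kg near-term price is my assumption:

    # Cost-per-kg arithmetic; launch prices/payloads are from the parent comment,
    # the ~$150/kg near-term Starship price is an assumption.
    launches = {
        "Starship (target)": (10e6, 150_000),  # ($ per launch, kg to LEO)
        "Falcon 9":          (67e6, 18_000),
        "Electron":          (7.5e6, 300),
    }
    for name, (price_usd, payload_kg) in launches.items():
        print(f"{name}: ~${price_usd / payload_kg:,.0f}/kg")

    ASSUMED_PRICE_PER_KG = 150   # $/kg, near-term assumption
    RPI_MASS_KG = 0.047          # 47 g Raspberry Pi
    print(f"Raspberry Pi to LEO: ~${ASSUMED_PRICE_PER_KG * RPI_MASS_KG:.2f}")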

concurrentsquar commented on 100k Stars   stars.chromeexperiments.c... · Posted by u/sans_souse
concurrentsquar · a year ago
Great visualization, though (ironically, as one of the first Chrome experiments) the music no longer works on Chrome by default (go to site settings > sound and set it to "Allow" to hear it), and it is somewhat outdated now (for example, it states that no exoplanets have been discovered orbiting Proxima Centauri (and that the 'proposed' JWST is required to find these planets)).
concurrentsquar commented on GPT-4.5 or GPT-5 being tested on LMSYS?   rentry.co/GPT2... · Posted by u/atemerev
summerlight · a year ago
At this moment, there's no real world benchmark at scale other than lmsys. All other "benchmarks" are merely sanity checks.
concurrentsquar · a year ago
OpenAI could either hire private testers or use A/B testing on ChatGPT Plus users (for example, when using ChatGPT, I often have to pick between 2 different responses to continue a conversation); both are probably much better than putting a model called 'GPT2' onto lmsys in several respects: not leaking GPT-4.5/5 generations (or the existence of a GPT-4.5/5) to the public at scale, and avoiding bias* (because people probably rate GPT-4 generations better if they are told (either explicitly or implicitly (e.g. socially)) that they're from GPT-5), to say the least. (There's a rough sketch at the end of this comment of how blinded pairwise votes like lmsys' turn into ratings.)

* While lmsys does hide the names of models until a person decides which model generated the best text, people can still figure out what language model generated a piece of text** (or have a good guess) without explicit knowledge, especially if that model is hyped up online as 'GPT-5;' even a subconscious "this text sounds like what I have seen 'GPT2-chatbot' generate online" may influence results inadvertently.

** ... though I will note that I just got a generation from 'gpt2-chatbot' that I thought was from Claude 3 (haiku/sonnet), and its competitor was LLaMa-3-70b (I thought it was 8b or Mixtral). I am obviously not good at LLM authorship attribution.
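
For completeness, here's roughly how those blinded pairwise votes get turned into a leaderboard - a toy Elo-style sketch with made-up votes and a made-up K-factor, not lmsys' actual implementation:

    # Toy Elo-style aggregation of blinded pairwise votes (made-up data and
    # K-factor; illustrative only, not lmsys' actual implementation).
    from collections import defaultdict

    K = 32  # assumed rating update step

    def expected_score(r_a, r_b):
        # Probability that A beats B under the logistic Elo model.
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def rate(votes, initial=1000.0):
        # votes: iterable of (model_a, model_b, winner) with winner in {"a", "b"}.
        ratings = defaultdict(lambda: initial)
        for a, b, winner in votes:
            s_a = 1.0 if winner == "a" else 0.0
            e_a = expected_score(ratings[a], ratings[b])
            ratings[a] += K * (s_a - e_a)
            ratings[b] += K * ((1.0 - s_a) - (1.0 - e_a))
        return dict(ratings)

    print(rate([("gpt2-chatbot", "llama-3-70b", "a"),
                ("gpt2-chatbot", "claude-3-haiku", "b")]))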

concurrentsquar commented on GPT-4.5 or GPT-5 being tested on LMSYS?   rentry.co/GPT2... · Posted by u/atemerev
numlocked · a year ago
I'm seeing this as well. I don't quite understand how it's doing that in the context of LLMs to date being a "next token predictor". It is writing code, then adding more code in the middle.
concurrentsquar · a year ago
Is it something similar to beam search (https://huggingface.co/blog/how-to-generate#beam-search) or something completely different? (It probably isn't beam search if it's changing code in the middle of a block.)

(I can't try right now because of API rate limits)
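
For reference, this is the kind of beam search I mean - a minimal sketch using the Hugging Face generate API from that blog post, with "gpt2" and the prompt as placeholders. Note that beam search still decodes strictly left to right; it just keeps several candidate continuations in parallel, so it wouldn't by itself explain edits in the middle of already-written code:

    # Minimal beam-search decoding sketch with Hugging Face transformers.
    # "gpt2" and the prompt are placeholders; any causal LM works the same way.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("def fizzbuzz(n):", return_tensors="pt")
    out = model.generate(
        **inputs,
        num_beams=5,              # keep the 5 highest-scoring partial sequences
        num_return_sequences=3,   # return the top 3 finished beams
        max_new_tokens=40,
        early_stopping=True,
        pad_token_id=tok.eos_token_id,
    )
    for seq in out:
        print(tok.decode(seq, skip_special_tokens=True))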

concurrentsquar commented on GPT-4.5 or GPT-5 being tested on LMSYS?   rentry.co/GPT2... · Posted by u/atemerev
skissane · a year ago
gpt2-chatbot is not the only "mystery model" on LMSYS. Another is "deluxe-chat".

When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"

One distinguishing feature of "deluxe-chat": although it gives high quality answers, it is very slow, so slow that the arena displays a warning whenever it is chosen as one of the competitors

[0] https://github.com/lm-sys/FastChat/issues/2527

concurrentsquar · a year ago
> One distinguishing feature of "deluxe-chat": although it gives high quality answers, it is very slow, so slow that the arena displays a warning whenever it is chosen as one of the competitors

Beam search or weird attention/non-transformer architecture?

concurrentsquar commented on GPT-4.5 or GPT-5 being tested on LMSYS?   rentry.co/GPT2... · Posted by u/atemerev
MyFirstSass · a year ago
Weird, it doesn't seem to have any info on reddit users or their writings. I tried asking about a bunch, also just about general "legendary users" from various subreddits and it seemingly just hallucinated.
concurrentsquar · a year ago
Reddit may be requiring OpenAI to pay (probably a lot of) money to legally use Reddit content for training, which is something Reddit is already doing with other AI labs (https://www.cbsnews.com/news/google-reddit-60-million-deal-a... ); but GPTBot is not banned under Reddit's robots.txt (https://www.reddit.com/robots.txt).
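
(You can check that robots.txt claim yourself; the result obviously depends on whatever the file says when you run it, and Reddit may rate-limit the default urllib user agent:)

    # Check whether Reddit's robots.txt disallows GPTBot (or everyone) from a path.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://www.reddit.com/robots.txt")
    rp.read()

    for agent in ("GPTBot", "*"):
        allowed = rp.can_fetch(agent, "https://www.reddit.com/r/MachineLearning/")
        print(f"{agent}: {'allowed' if allowed else 'disallowed'}")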

This is assuming that lmsys' 'GPT2' is a retrained GPT-4t or a new GPT-4.5/5, though; I doubt that (one obvious issue: why name it 'GPT2' and not something like 'openhermes-llama-3-70b-oai-tokenizer-test' (for maximum discreetness) or even 'test language model (please ignore)' (which would work well for marketing)? 'GPT2' (as a name) doesn't really work well for marketing or privacy (at least compared to the other options)).

Lmsys has tested models with weird names for testing before: https://news.ycombinator.com/item?id=40205935

u/concurrentsquar

Karma: 95 · Cake day: June 15, 2023