Readit News
lpasselin commented on Your Job used to impress people. That era just ended   carmenvankerckhove.substa... · Posted by u/lordleft
lpasselin · 6 months ago
A lawyer can become a locksmith in a few months (weeks? days?). If the flip mentioned is really happening, isn't it temporary?
lpasselin commented on Harnessing the Universal Geometry of Embeddings   arxiv.org/abs/2505.12540... · Posted by u/jxmorris12
lpasselin · 7 months ago
Hey, I read the paper in detail and presented it to colleagues during our reading group.

I still do not understand exactly where D1L comes from in LGan(D1L, T(A1(u))). Is D1L simply A1(u)?

I also find that the mixed notation between figures 2 and 3 makes them tricky to follow.

I would have loved more insight into the results in the tables.

And more inversion results, on more than the Enron dataset, since that is one of the end goals, even if it reuses another method.

Thank you for the paper, very interesting!

lpasselin commented on First images from Euclid are in   dlmultimedia.esa.int/down... · Posted by u/mooreds
lpasselin · a year ago
Do the captured elements move much during this snapshot, since it will take months? Is the difference significant?
lpasselin commented on Efficient high-resolution image synthesis with linear diffusion transformer   nvlabs.github.io/Sana/... · Posted by u/Vt71fcAqt7
lpasselin · a year ago
This comes from the same group as the EfficientViT model. A few months ago, their EfficientViT model was the only modern, small ViT-style model I could find with raw PyTorch code available. No dependencies on the shitty frameworks and libraries that other ViT models use.
lpasselin commented on The Intelligence Age   ia.samaltman.com/... · Posted by u/firloop
lpasselin · a year ago
Custom and _competent_ AI tutors will be a game changer for education.
lpasselin commented on Amazon's Secret Weapon in Chip Design Is Amazon   spectrum.ieee.org/amazon-... · Posted by u/mdp2021
rytill · a year ago
AWS is so anti-customer with respect to GPUs right now.

They have the highest prices of any cloud. What happened to “your margin is my opportunity”?

And, as far as I know, customers are unable to allocate a VM with fewer than eight A100, H100, or H200 GPUs. (Please tell me how if I’m wrong.)

So, customers are incentivized to use other cloud products for GPUs in the short term.

They seem to be heavily invested in their own chips in the medium term.

lpasselin · a year ago
Meanwhile, I had a hard time last week getting a machine with 8 GPUs from Azure.
lpasselin commented on DiyPresso: DIY Espresso Machine   diypresso.com/... · Posted by u/ragebol
drrotmos · a year ago
The more I think about it, the more I feel that this is the wrong solution to the problem. Disclaimer: I'm doing a small open-source espresso controller project, check it out if you're interested, but it's not ready for prime time yet: https://github.com/variegated-coffee.

My thinking is that this machine appeals mostly to people who already have an espresso machine. It's not particularly technologically advanced: it's a single boiler, an E61 group, and a vibratory pump. If you're buying this machine, you're probably replacing a machine at a similar technology level, and that's not really a sustainable choice.

A well-maintained espresso machine has a lifespan measured in decades. Many recent innovations in espresso machines are mostly controllers, sensors, and actuators, plus better pumps. These are all things that can easily be retrofitted to an older espresso machine.

There has been innovation in other areas not easily retrofittable (saturated groups, dual boilers instead of heat-exchangers, to name a few), but this machine doesn't really feature any of those.

I strongly believe that in this particular demographic, it's a much better (more sustainable, cheaper and all around more fun) idea to retrofit new and advanced parts to the espresso machine they presumably already have, than to buy a whole new machine. We don't need old espresso machines on landfills.

On the off chance that a prospective buyer doesn't already have a similar espresso machine, this isn't too bad a choice, and the price is decent. On the other hand, there are a lot of used machines on the market looking for a new owner that can be upgraded.

lpasselin · a year ago
I have a $200 mini machine and would like to upgrade. I can DIY anything. What would you suggest for a maximum budget of $1500?
lpasselin commented on Mamba: The Easy Way   jackcook.com/2024/02/23/m... · Posted by u/jackcook
Der_Einzige · 2 years ago
Very annoying namespace conflict, since a package called "mamba" (a faster reimplementation of the Python conda package manager) existed for a while before this architecture was even dreamed up.

https://github.com/mamba-org/mamba

Beyond that, I'll care about an alternative to transformers when it shows superior performance in an open-source 7B-34B model compared to transformer competitors. So far, this has not happened.

lpasselin · 2 years ago
The Mamba paper shows significant improvements at all model sizes, up to 1B, the largest one tested.

Is there any reason why it wouldn't scale to 7B or more? Have they tried it?

lpasselin commented on New embedding models and API updates   openai.com/blog/new-embed... · Posted by u/Josely
minimaxir · 2 years ago
To compare with the MTEB leaderboard (https://huggingface.co/spaces/mteb/leaderboard), the new embedding models are on par with open-source embedding models like BAAI/bge-large-en-v1.5, not a drastic improvement if already using them. Obviously, a cost/performance improvement is still good.

I've found evidence that the OpenAI 1536D embeddings are unnecessarily big for 99% of use cases (and now there's a 3072D model?!), so the ability to reduce dimensionality directly from the API is appreciated for the reasons given in this post. Just chopping off dimensions to an arbitrary dimensionality is not a typical dimensionality-reduction technique, so it likely requires a special, novel training/alignment technique.

EDIT: Tested the API: it does support reducing to an arbitrary number of dimensions other than the ones noted in the post (even 2D for data viz, though that may not be as useful since the embeddings are normalized).

The embeddings aren't "chopped off"; the first components of the embedding change as dimensionality is reduced, but not by much.
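Mechanically, what's being described is just truncation followed by renormalization; the training that makes the leading components meaningful is OpenAI's, but the client-side operation can be sketched in plain Python (the function name below is hypothetical, not part of any API):

```python
import math

def shorten_embedding(emb, dims):
    """Keep the first `dims` components of an embedding, then renormalize
    to unit length. Useful only because the model was trained so that the
    leading components carry most of the information (a sketch, not
    OpenAI's implementation)."""
    v = emb[:dims]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm > 0 else list(v)

# Toy example: a random unit vector standing in for a real 3072D embedding.
import random
random.seed(0)
full = [random.gauss(0, 1) for _ in range(3072)]
scale = math.sqrt(sum(x * x for x in full))
full = [x / scale for x in full]

short = shorten_embedding(full, 256)
print(len(short))  # 256; the result is unit-length again after renormalizing
```

Because of the renormalization step, the surviving components do change in value relative to the full vector, which matches the "not chopped off, but not much" observation above.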

lpasselin · 2 years ago
Most models on the leaderboard support a much lower sequence length.
lpasselin commented on PhotoPrism: AI-powered photos app for the decentralized web   github.com/photoprism/pho... · Posted by u/pretext
lpasselin · 2 years ago
Last time I tried, it did not have a good working Android app.

u/lpasselin

Karma: 175 · Cake day: September 25, 2016
About
[ my public key: https://keybase.io/lpasselin; my proof: https://keybase.io/lpasselin/sigs/YWIITO_iIUEVT5paUBS-NgUR_GZaLG167gL3Rh3Aagg ]

https://asselin.engineer
