Readit News
hallh commented on Tell HN: Azure outage    · Posted by u/tartieret
cyptus · 2 months ago
AFD is down quite often regionally in Europe for our services. In 50%+ of cases they just don't report it anywhere, even when it's down for 2h+.
hallh · 2 months ago
Same experience. We've recently migrated fully away from AFD due to how unreliable it is.
hallh commented on Denmark shuts multiple airports, more unidentified drones spotted   msn.com/en-us/news/world/... · Posted by u/hallh
elric · 3 months ago
So are these Russian drones? Or perhaps protests against their pushing of Chat Control?
hallh · 3 months ago
So far, the police have only confirmed that they are non-commercial drones. The Ministry of Defence is hosting a press conference about it shortly.
hallh commented on AGI is not multimodal   thegradient.pub/agi-is-no... · Posted by u/danielmorozoff
nemjack · 7 months ago
For sure, we can't process images the same way that we process sound, but the author argues for processing images and text the same way, and text is fundamentally a visual medium of communication. The author makes a good point about how VLMs can still struggle to determine the length of a word, or generate words that start and end with specific letters, etc., which is an indicator that an essential aspect of a modality (its visual aspect) is missing from how it is processed. Surely a unified visual process for text and image would not have such failure points.

I agree that modality-specific processing is very shallow at this point, but it still seems not to respect the physicality of the data. Today's modalities are not actually akin to human senses, because each should be processed by its own assortment of "sense" organs, e.g. one for things visual, one for things audible, etc.

hallh · 7 months ago
I don't think you can classify reading as a purely visual modality, despite it being a visual medium. People with dyslexia may see perfectly fine; it's the translation layer processing the text that gets jumbled. Granted, we are not born with the ability to read, so that translation layer is learned. On the other hand, we don't perceive everything in our visual field either; magicians and YouTube videos use this limitation to trick and entertain us, and that limitation we are presumably born with, given that it's a shared human trait. Some of the translation layers involved in processing our vision evidently evolved naturally and are part of our brains, so why would we not allow artificial intelligence similarly advanced starting points for processing data?
hallh commented on Notes on rolling out Cursor and Claude Code   ghiculescu.substack.com/p... · Posted by u/jermaustin1
hallh · 8 months ago
Having linting/prettifying and fast test runs in Cursor is absolutely necessary. On a new-ish React TypeScript project, all the frontier models insist on using outdated React patterns, which consistently need to be corrected after every generation.

Now I only wish for a Product Manager model that can render the code and provide feedback on UI issues. Using Cursor and Gemini, we were able to get an impressively polished UI, but it needed a lot of guidance.

> I haven’t yet come across an agent that can write beautiful code.

Yes, the AI doesn't mind hundreds of lines of if statements; as long as it works, it's happy. It's another thing that needs several rounds of feedback and adjustments to make it human-friendly. I guess you could argue that human-friendly code will soon be a thing of the past, so maybe there's no point fixing that part.
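As a toy illustration of the kind of cleanup this means in practice (the function and values here are made up, not from any real project): the if-chain an agent happily emits, next to the table-driven version a human reviewer would ask for.

```typescript
// Hypothetical example: the kind of if-chain an agent generates without complaint.
function shippingCostVerbose(region: string): number {
  if (region === "EU") {
    return 5;
  } else if (region === "US") {
    return 8;
  } else if (region === "APAC") {
    return 12;
  } else {
    return 20;
  }
}

// The human-friendly refactor: a data-driven lookup with an explicit fallback.
const SHIPPING_COST: Record<string, number> = { EU: 5, US: 8, APAC: 12 };

function shippingCost(region: string): number {
  return SHIPPING_COST[region] ?? 20;
}

console.log(shippingCost("EU")); // 5
console.log(shippingCost("MARS")); // 20 (fallback)
```

Both behave identically; the second is the one a reviewer won't send back.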

I think improving the feedback loops and reducing the frequency of "obvious" issues would do a lot to increase the one-shot quality and raise the productivity gains even further.

hallh commented on Show HN: Fermi – A Wordle-style game for order-of-magnitude thinking   fermi-game.andrewnoble.me... · Posted by u/andrewrn
hallh · 9 months ago
The idea has potential, but it needs polishing. When I read the tutorial, I thought I had to guess the target number using completely unrelated values. I was a bit disappointed to see the "options" were obvious and not really optional, and the scales too limited. It seemed too easy. It would be more entertaining to guess the number of people currently in flight based on x full football stadiums, the average number of eggs laid by y hens per month, etc.

A visual cue for landing within the success range would help a lot too. I was a bit confused about whether I was right or not after submitting the answer.

hallh commented on Let's talk about AI and end-to-end encryption   blog.cryptographyengineer... · Posted by u/chmaynard
peppertree · a year ago
Are embeddings enough to preserve privacy? What if I run the encoder/decoder on device and only communicate with the server in embeddings?
hallh · a year ago
No, the original text can largely be recovered from embeddings[0] if you know which embedding model was used.

[0] https://arxiv.org/abs/2310.06816
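The cited paper trains a model to invert real dense embeddings; as a deliberately crude sketch of why knowing the embedding model matters, here is a brute-force search against a trivial stand-in "embedding" function. Everything here is hypothetical toy code, not a real embedding model or a real attack.

```typescript
// Toy stand-in for an embedding model: NOT a real embedding, just a
// deterministic text -> vector map so the attack idea can be shown.
function toyEmbed(text: string): number[] {
  const vec = new Array(8).fill(0);
  for (let i = 0; i < text.length; i++) {
    vec[i % 8] += text.charCodeAt(i);
  }
  return vec;
}

function sameVec(a: number[], b: number[]): boolean {
  return a.length === b.length && a.every((x, i) => x === b[i]);
}

// An attacker who knows which "model" produced the vector can search
// candidate texts until one reproduces the observed embedding.
function recover(observed: number[], candidates: string[]): string | undefined {
  return candidates.find((c) => sameVec(toyEmbed(c), observed));
}

const intercepted = toyEmbed("my password is hunter2");
const guesses = ["hello world", "my password is hunter2", "lorem ipsum"];
console.log(recover(intercepted, guesses)); // "my password is hunter2"
```

Real inversion attacks don't need a candidate list at all; they reconstruct text directly from the vector, which is the point of the paper above.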

hallh commented on AI-powered music scam nets musician $10M in royalties–and federal charges   arstechnica.com/informati... · Posted by u/pseudolus
para_parolu · a year ago
I thought Spotify paid per listening time. That way users pick who gets the money.
hallh · a year ago
It doesn't. The total revenue available for royalties is distributed to rightsholders according to their share of total plays in a given timeframe[1]. It's been a hot topic for a long time, as your subscription doesn't go only to the artists you listen to. Huge pop hits that are responsible for X percent of plays across all Spotify streams will receive X percent of your paid subscription, even though you as a subscriber have never listened to them.

It's a simplification, since 100% of an individual's subscription doesn't go directly to royalties; it's what remains after expenses.

Smaller artists are frustrated with this model as it favours the big artists.
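The pro-rata split described above can be sketched numerically (the pool size, artist names, and play counts are invented for illustration; they are not Spotify's real figures):

```typescript
// Hypothetical numbers: a royalty pool split pro-rata by share of total
// plays, regardless of which subscriber generated which play.
const royaltyPool = 1_000_000; // pool after platform expenses, in dollars

const playsByArtist: Record<string, number> = {
  hugePopHit: 800_000,
  midTierBand: 150_000,
  smallIndieAct: 50_000,
};

const totalPlays = Object.values(playsByArtist).reduce((a, b) => a + b, 0);

const payouts: Record<string, number> = Object.fromEntries(
  Object.entries(playsByArtist).map(([artist, plays]) => [
    artist,
    (plays / totalPlays) * royaltyPool,
  ]),
);

// hugePopHit takes 80% of the pool, including a share of subscription
// money from people who never streamed it.
console.log(payouts["hugePopHit"]); // 800000
console.log(payouts["smallIndieAct"]); // 50000
```

Under a user-centric model, each subscriber's fee would instead be split only among the artists that subscriber actually played.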

[1] https://support.spotify.com/us/artists/article/royalties/

Edit: added the link.
