pizza commented on Claude Composer   josh.ing/blog/claude-comp... · Posted by u/coloneltcb
altmanaltman · 4 days ago
> What's the point if human-made art isn't interesting or artistically worthwhile?

Because a human is making it, expressing something is always worthwhile to the individual on a personal level. Even if it's not "artistically worthwhile", the process is rewarding to the participant at the very least, which is why a lot of people find enjoyment in creating art even if it's not commercially successful.

But in this case, the criteria change for the final product (the music being produced): it is not artistically worthwhile to anyone, not even the creator.

So no: a person with no talent (their own claim) using an LLM to create art is, by default, doing something far less worthwhile than a person of any talent level creating art on their own.

pizza · 4 days ago
I think you're mistaking the .wav for the final product, when really it's the .html blog post and this discussion.
pizza commented on Starlink users must opt out of all browsing data being used to train xAI models   twitter.com/cryps1s/statu... · Posted by u/pizza
crimsonnoodle58 · 22 days ago
So, since most of the web is HTTPS now, they have DNS requests (if users aren't using a third-party DNS like 1.1.1.1) and IP addresses, and maybe the SNI domain name if they're doing packet inspection (see the sketch below).

Not really sure how useful this would be for model training.

Maybe ranking which sites it should give as answers based on popularity?
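
To make the SNI point concrete, here is a minimal Python sketch (the hostname is just a placeholder, not from the thread): even over HTTPS, the hostname travels in cleartext in the DNS query and again in the TLS ClientHello's SNI field unless Encrypted Client Hello is in use, so an ISP doing packet inspection can read it while the HTTP payload stays encrypted.

```python
import socket
import ssl

hostname = "example.com"  # placeholder host for illustration

ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as raw:
    # server_hostname populates the SNI extension of the ClientHello,
    # which is sent before encryption is established.
    with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
        print("negotiated:", tls.version())
```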

pizza · 22 days ago
This X thread may not be the best source of clarity on what is actually opted into by default. Sorry. I looked into it, and it seems that Starlink denies browsing history would be shared [0]. It seems I can't edit the title any more.

> Do you share my personal information for AI training? We are committed to protecting your privacy. In some instances, we may share personal information with trusted third-party partners who, among other activities, help us develop AI-enabled tools that improve your customer experience, although you can always opt out. Rest assured that we take reasonable safeguards to protect and secure your information whenever it is used or shared.

> Will these AI models see my Internet history? No, your internet history will never be shared with AI models, including individual browsing habits or geolocation tracking, and we comply with laws prohibiting unauthorized surveillance.

> What personal information does Starlink collect from me? We only collect what’s needed to provide you great service—like your name, address, email, and payment details when you sign up or order. We also gather some technical information (like IP address or service performance data) to keep your connection fast and reliable.

[0] https://starlink.com/support/article/b82cf54a-8e57-917a-bd06...

pizza commented on Inside The Internet Archive's Infrastructure   hackernoon.com/the-long-n... · Posted by u/dvrp
lysace · 25 days ago
Do you really think that is a good argument against the perception of technical stagnation?
pizza · 25 days ago
That sounds really entitled.
pizza commented on The unreasonable effectiveness of the Fourier transform   joshuawise.com/resources/... · Posted by u/voxadam
seba_dos1 · a month ago
The unreasonable effectiveness of considering something harmful.
pizza · a month ago
Lies, Damned Lies, and Unreasonable Effectiveness

pizza commented on The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (2018)   arxiv.org/abs/1803.03635... · Posted by u/felineflock
observationist · a month ago
Neural networks are effectively gauge invariant: there is a huge space of valid isomorphisms as far as possible "valid" layer orderings go, and if your network is overparameterized, the space of "good enough" approximations gets correspondingly larger. The "good enough" sets are a sort of fuzzy gauge quotient approximating some "ideal" function per layer, cluster, or block, depending on your optimizer and architecture. (A minimal demonstration of the underlying permutation symmetry appears at the end of this comment.)

https://arxiv.org/html/2506.13018v2 - Here's an interesting paper that can help inform how you might look at networks, especially in the context of lottery tickets, gauge quotients, permutations, and what gradient descent looks like in practice.

Kolmogorov-Arnold Networks are better at exposing gauge symmetry and operating in that space, but they aren't optimized for the hardware we have; mechinterp and other motivations might inspire new hardware, though. If you know what your layer function should look like when ordered so that it resembles a smooth spline, you could initialize and freeze that layer's weights and force the rest of the network to learn within the context of your chosen ordering.

The number of "valid" configurations for a layer is large, especially if you have more neurons in the layer than you need, and the number of subsequent layer configurations is much larger than you'd think. The lottery ticket hypothesis is just circling that phenomenon without formalizing it: some surprisingly large percentage of possible configurations will approximate the function you want a network to learn. It doesn't necessarily gain you advantages in achieving the last 10%, and there could be counterproductive configurations that collapse before reaching an optimal configuration.

There are probably optimizer strategies that can exploit initializations of certain types, for different classes of activation functions, and achieve better performance for a given architecture, and all of those things are probably open to formalized methods based on existing theory around gauge-invariant systems and gauge quotients, with different layer configurations existing as points in gauge orbits in high-dimensional spaces.

It'd be really cool if you could throw twice as many neurons as you need into a model, randomly initialize a bunch of times until you get a winning ticket, then distill the remainder down to your intended parameter count, and train from there as normal.

It's more complex with architectures like transformers, but you're not dealing with a combinatorial explosion with the LTH - more like a little combinatorial flash flood, and if you engineer around it, it can actually be exploited.
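
A minimal NumPy sketch of that permutation symmetry (illustrative, not from the linked paper): permuting a hidden layer's units, along with the matching rows and columns of the adjacent weight matrices, leaves the computed function unchanged, which is exactly why so many distinct weight configurations are "valid".

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer MLP: f(x) = W2 @ relu(W1 @ x)
d_in, d_hid, d_out = 4, 8, 3
W1 = rng.normal(size=(d_hid, d_in))
W2 = rng.normal(size=(d_out, d_hid))

def f(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Permute the hidden units: reorder the rows of W1 and the columns of
# W2 by the same permutation. The composite function is identical, so
# every permutation gives another point on the same gauge orbit.
perm = rng.permutation(d_hid)
W1p, W2p = W1[perm, :], W2[:, perm]

x = rng.normal(size=d_in)
assert np.allclose(f(x, W1, W2), f(x, W1p, W2p))
print("permuted network computes the same function")
```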

pizza · a month ago
Yes to this. Furthermore:

- you can solve neural networks in analytic form with a Hodge star approach* [0]

- if you use a picture to set the initial weights of your NN, you can see visually how much your choice of optimizer actually moves the weights; e.g. non-dualized optimizers look like they barely change things, whereas dualized Muon changes the weights so much that you cannot recognize the originals [1] (rough sketch below)

*unfortunately, this is exponential in memory

[0] M. Pilanci — From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity https://arxiv.org/abs/2309.16512

[1] https://docs.modula.systems/examples/weight-erasure/
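
A rough sketch of the probe in [1], under simplifying assumptions (a single linear layer, plain full-batch gradient descent instead of Muon, and a random array standing in for the picture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Seed a layer's weights with a recognizable pattern, train briefly,
# then check how far the optimizer moved them. High correlation with
# the seed means the optimizer barely "erased" the initialization.
img = rng.uniform(size=(32, 32))  # stand-in for grayscale image pixels
W = img.copy()                    # weights initialized from the image

X = rng.normal(size=(256, 32))    # synthetic inputs
Y = rng.normal(size=(256, 32))    # synthetic targets

lr = 1e-2
for _ in range(100):              # gradient of mean squared error
    grad = X.T @ (X @ W - Y) / len(X)
    W -= lr * grad

corr = np.corrcoef(img.ravel(), W.ravel())[0, 1]
print(f"correlation with the seed image after training: {corr:.3f}")
```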

pizza commented on Australia begins enforcing world-first teen social media ban   reuters.com/legal/litigat... · Posted by u/chirau
ricardobeat · 2 months ago
Adults love 'garbage'. How do you define that?

There is also the problem that making platforms responsible for policing user-generated content 1) gives them unwanted political power and 2) creates immense barriers to entry in the field, which is also very undesirable.

pizza · 2 months ago
I have no idea how to define it. I also don't know if I'm personally convinced one way or another about the harms. I just think the platforms would probably have to be made to make more substantial changes if that were the case.

u/pizza

Karma: 14168 · Cake day: August 13, 2009
About
Things that I think are cool:

- algorithmic ethics / praxeology meets algorithms

- algebraic topology

- reinforcement learning

- AGI

- neuroscience (as a true science but also its abuse as pop phrenology)

- information theory applied to mental health and society

- trustless/trustful collaborative systems, zero-knowledge proofs, differential privacy

- alternatives to capitalism

- software defined radio

- decentralized/localizable tech

- music production

- weight lifting

- the minimization of negative externalities, and the maximization of positive externalities

- compressed sensing

- effective methods of dealing with stress, and information overload!

- capitalist realism as a byproduct of information theory

Things that I think would be cool if they existed:

- computational metaphysics

- 'paint-able proofs'

- an IDE where the computer is the user of the IDE and the human simply guides it through tough corner cases

- containerized, cloud-based digital audio workstations, a la gitpod or github codespaces

email: (my username).on.hn@gmail.com
