Readit News
logiduck commented on Vibrating vests translate music for deaf concertgoers (2023)   techxplore.com/news/2023-... · Posted by u/sargstuff
CivBase · a year ago
I've never understood this kind of thing. My immediate impression - as someone with hearing who has never tried one of these vests - is not that deaf people are being included, but rather they're being given an entirely separate experience. It's almost like if I went to a Taylor Swift concert but brought my headphones and listened to Bon Jovi the whole time.

How do you "translate" music into vibrations while preserving the feeling created by the original work? Do these vests actually create a similar experience for deaf people or are they just something novel to occupy themselves with while everyone else is listening to the music?

Not trying to be cynical here. I'm genuinely curious if anyone can speak to this from experience.

logiduck · a year ago
> How do you "translate" music into vibrations

Is there anything to translate? Music is vibrations to begin with.
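
In that spirit, the "translation" such a vest performs is roughly band-splitting: carve the spectrum of each audio frame into a few bands and drive a group of motors with each band's energy. A minimal sketch below — the band edges, frame size, and motor mapping are my own illustrative assumptions, not how any particular vest works:

```python
import numpy as np

def band_intensities(samples, rate, bands=((20, 250), (250, 2000), (2000, 8000))):
    """Map one audio frame to per-band vibration intensities in [0, 1].

    Each band could drive one motor group in a haptic vest, e.g.
    bass -> lower torso, mids -> chest, highs -> shoulders (illustrative).
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    energies = np.array(
        [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    )
    peak = energies.max()
    return energies / peak if peak > 0 else energies

# A 100 ms frame dominated by a 60 Hz bass tone, with a quieter 3 kHz tone:
rate = 16000
t = np.arange(rate // 10) / rate
frame = np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
intensities = band_intensities(frame, rate)  # bass band comes out strongest
```

Run per frame in real time, this turns a kick drum into a thump on the back and a hi-hat into a flutter at the shoulders — which is arguably less a translation than a direct re-rendering of the same vibrations.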

logiduck commented on The argument over a long-standing autism intervention   newyorker.com/science/ann... · Posted by u/jyunwai
bayesianbot · 2 years ago
The argument against ABA is exactly that it looks like it works from the perspective of the (neurotypical) parents, but the kids feel like they're basically tortured until they behave the way their parents and society expect them to. To be something they aren't, forced into a box where they just don't fit, until they do.

I haven't gone through official ABA therapy, but I had an extremely ABA-like childhood and I'd have to agree. When I was a kid and didn't know what was different about me, I used to think that at least there were probably millions like me who didn't have to go through that. When I found out about my autism, and later about ABA, I was horrified to learn it's an official therapy. But I'm sure there are autistic people who'd disagree with me.

logiduck · 2 years ago
I have had exposure to ABA, mostly through the lens of parents whose children were in ABA, and it was mostly positive. Being able to take their kids out and put them in the car for the first time without a breakdown after starting therapy was a major achievement in their eyes.

I am just curious, though: what are your limits? Isn't everyone being put into a box? Isn't that just being part of society? I don't know what your experiences are, but isn't there probably a spectrum of ABA from good to bad, just like there is a spectrum of all types of interventions and parenting, from overbearing to too lenient?

I'm just genuinely interested, because it seems like being part of society, unless you are in the 1%, has a big aspect of conformity and "fitting in" even if that isn't what you want to do. Historically, that has just been known as growing up.

logiduck commented on The argument over a long-standing autism intervention   newyorker.com/science/ann... · Posted by u/jyunwai
logiduck · 2 years ago
There is a lot of money from Silicon Valley rushing into ABA treatment, and having seen the results, I can't help but think we can do better.

Current systems of ABA rely on billing the therapy to insurance, so most of these ABA "mills" just throw inexperienced college kids who haven't graduated yet at the problem, on a session-by-session basis, paid minimum wage.

These college-aged therapists have no interaction with each other, since they go directly to the session at the client's house, then leave and go home to write up notes. They often switch patients quite regularly, so there is never a chance for a comprehensive plan to emerge.

Insurance only pays for treating the patient, so no space is available for training the parents. If we want to do better, we have to start with the parents and develop therapies that last the whole week, not just two hours a week.

There's a lot of potential for improving the system, but the current ABA system that SV is taking over focuses on extremely short-sighted "box checking" and does nothing to develop a long-term approach for the family.

logiduck commented on Drowning in code: The ever-growing problem of ever-growing codebases   theregister.com/2024/02/1... · Posted by u/pseudolus
throwuwu · 2 years ago
If you have an AI capable of writing machine code based on natural language you likely also have an AI that can translate that machine code to any other language you would like. You could then use a normal compiler to verify if it is correct and then read the code yourself. Or you could just get good at reading and writing assembly.
logiduck · 2 years ago
Yes, that is exactly my point. You will have to rely on AI to translate it back out, but that translation is built on probabilities, not rule-based machine translation. So you can ask the AI to explain everything to you, but you are still trusting the "black box" to tell you what is happening. Very different from today.

Also, you can "get good" at reading assembly, but that doesn't matter if the AI can output a custom OS from scratch and a custom VM to execute the program it wrote for your use case. It will be so impossibly complex that it would be the equivalent of studying protein folding.

Instead people will just trust the AI.

It also won't help you if the code base the AI produces for a SaaS app is a million lines of assembly.

Instead of having different layers of OS, compiler, and high-level language, an AI will just be able to produce one layer. Because after decades of trusting the AI to write our code, why wouldn't we let it?

The current generation of AI outputting code in human-centric programming languages will be a blip in the history of AI. As it advances, it can just skip that step.

It will be orders of magnitude more complex and opaque than anything we have today.

logiduck commented on Drowning in code: The ever-growing problem of ever-growing codebases   theregister.com/2024/02/1... · Posted by u/pseudolus
throwuwu · 2 years ago
This already happens when you compile code for a specific architecture. The thing you fear will happen has already happened and we’re fine.

I’d worry more about generated code from an AI that doesn’t fully understand the codebase. That would be as bad as letting the junior devs run loose.

logiduck · 2 years ago
Not really at all.

What you are describing is a traceable transformation of code in which there are several intermediate layers that people can inspect and understand. They can inspect the exact rules for that transformation. The process is repeatable and verifiable.

What I am describing is black-box stochastic generation of low-level code in which there is no higher-level representation anymore: AI generating assembly not by a set of rules, but using statistics. There will be no individual layers to unwind or inspect, because the AI doesn't need them. Our separation of concerns was built for human brains, to limit the complexity of projects to something we can understand.

logiduck commented on Drowning in code: The ever-growing problem of ever-growing codebases   theregister.com/2024/02/1... · Posted by u/pseudolus
javier_e06 · 2 years ago
The article intersects very well with AI tools poised to analyze the code and try to understand it for us. Software tools already exist to deal with large codebases; for example, right now we use tools like valgrind and coverity to help us understand the code. Lead architects use analysis tools as well as developers' eyeballs to vet the code. Soon enough an AI tool will guide developers in understanding their own codebase and managing it. The same goes for infrastructure: ChatGPT can guide you right now, with examples, on how to create an AWS service from scratch.

So yes, the codebase is growing fast and with it the tools to manage it.

logiduck · 2 years ago
We may have the tools to manage it, but we are losing the ability to understand it.

AI writing software will be an exponential explosion in software complexity.

AI could very well create its own programming language, more efficient for the AI to code in, that we have no hope of understanding. Imagine that AI started to output, in assembly, a large SaaS app that today would be written in Python, because for the AI the extra cognitive overhead of using assembly doesn't exist. At first we might resist and tell it to use a language we understand, but as time goes on we grow more comfortable that it does the "right" thing, and down the road people just generate raw assembly using AI without really understanding what the code is doing, only checking whether it behaves the way they expect.

Imagine entire codebases spun up in seconds with so many lines of code, a single person would never have hopes of understanding everything, needing to rely on AI to summarize and explain the code for them. Now imagine that massive code base being iterated and worked on for a decade over the life of a company.

AI could bring terabyte-sized codebases within a decade or so.

logiduck commented on OpenAI: Copy, Steal, Paste   computerworld.com/article... · Posted by u/CrankyBear
corethree · 2 years ago
Technically it's not a copy, and nothing was stolen. It's a best-fit curve among a series of datapoints. The datapoint is a copy; if the best-fit curve never touches the datapoint, then it's technically not a copy.
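
To make the curve framing concrete, here is a toy least-squares fit (made-up data, nothing to do with any actual model): the fitted line is shaped by every datapoint yet exactly reproduces none of them.

```python
import numpy as np

# Made-up "datapoints" and a degree-1 least-squares line through them.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.2, 0.9, 2.1, 2.8, 4.1])
slope, intercept = np.polyfit(x, y, 1)
fitted = slope * x + intercept

# Does the fit pass exactly through any of the points it was trained on?
reproduces_any = bool(np.any(np.isclose(fitted, y)))  # False here
```

Whether "influenced by every point but touching none" counts as copying is exactly the legal question, not the technical one.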

The difficult part is that the technicality here is legal and ethical from any standpoint. The high-level ramifications are a bit unfair in the sense that, yes, the data is being used to create an AI that can replace you and your job, from data you created to DO your job.

This is the conundrum with AI. Our legal system and moral intuition make it permissible to read and interpret things, which is pretty much all ML does. Does it make sense to make it illegal for someone to learn programming for free from online articles? No. Does it make sense to make it illegal for someone to learn programming for free from online articles in order to take over your job? No. Should it be illegal to use tools that help me read and learn programming for free from online articles so I can take over your job? No.

But if that tool is AI, suddenly, yes, it should be illegal. The logic makes no sense. But we shouldn't rely on logic alone to maintain our morals: even if the position lacks logical cohesion, if the outcomes are negative it should still be immoral, imo.

logiduck · 2 years ago
Copyright laws do not support your argument.

There have been many cases in music where the maker of the offending song was forced to pay because it was "close enough" to the curve without touching it.

logiduck commented on OpenAI: Copy, Steal, Paste   computerworld.com/article... · Posted by u/CrankyBear
logiduck · 2 years ago
Of course there is fair use, etc. But modern copyright was a reaction to the printing press, so that people would still be motivated to create content in the face of new technology.

If current copyright laws do not protect people who create something new from the fear of being ripped off, then new copyright laws need to come.

One thing I wondered about: arguments from one side of the issue say that AI copying and extracting information for free isn't stealing, but what happens if you apply that argument to things that aren't covered by copyright, like military, corporate, and trade secrets?

Like, if an LLM saw the Coca-Cola formula and the weights were released, what are the consequences? If it ingested top-secret confidential information and the weights were released, I assume that counts as stealing something and distributing it.

logiduck commented on Why is machine learning 'hard'? (2016)   ai.stanford.edu/~zayd/why... · Posted by u/jxmorris12
epistasis · 2 years ago
Love that thread. The top comment is excellent:

> Like picking hyperparameters - time and time again I've asked experts/trainers/colleagues: "How do I know what type of model to use? How many layers? How many nodes per layer? Dropout or not?" etc etc And the answer is always along the lines of "just try a load of stuff and pick the one that works best".

> To me, that feels weird and worrying. It's like we don't yet understand ML properly yet to definitively say, for a given data set, what sort of model we'll need.

This embodies the very fundamental difference between science and engineering. With science, you make a discovery, but rarely do we ask "what was the magical combination that let me find the needle in the haystack today?" We instead just pass on the needle and show everyone we found it.

Should we work on finding out the magic behind hyperparameters? In bioinformatics, the brilliant mathematician Lior Pachter once attacked the problem of sequence alignment using the tools of tropical algebra: which parameters to the alignment algorithms resulted in which regimes of solutions? It was beautiful. It was great to understand. But I'm not sure it ever got published (though it likely did). Having reasonable parameters is more important than understanding how to pick them from first principles, because even if you know all the possible output regimes for different segments of the hyperparameter space, really the only thing we care about is getting a functionally trained model at the end.

Sometimes deeper understanding provides deeper insights to the problems at hand. But often, they don't, even when the deeper understanding is beautiful. If the hammer works when you hold it a certain way, that's great, but understanding all possible ways to hold a hammer doesn't always help get the nail in better.

logiduck · 2 years ago
Yes, this makes it very difficult to apply ML and RL in non-simulated scenarios.

With simulated scenarios you can just replay and "sweep" across hyperparameters to find the best one.

In a real-world scenario with limited information, fine-tuning hyperparameters is much harder, as you quickly find yourself stuck in local maxima.
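
In a simulator, the "sweep" is just brute force over a grid of configurations, rerunning the same replayable episode for each. A toy sketch, with a made-up scoring function standing in for a real training run (the grid values and objective are purely illustrative):

```python
import itertools

def simulate(lr, layers):
    """Hypothetical replayable training run; scores best at lr=0.01, layers=3."""
    return 1.0 - abs(lr - 0.01) * 10 - abs(layers - 3) * 0.1

# Candidate hyperparameter values to sweep over.
grid = {"lr": [0.001, 0.01, 0.1], "layers": [2, 3, 4]}

# Cartesian product of the grid: one simulated run per configuration.
configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
best = max(configs, key=lambda cfg: simulate(**cfg))
```

With real-world data you get one shot per configuration at real cost, so this exhaustive loop is exactly what you lose.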

logiduck commented on ElevenLabs lands $80M from a16z and Sequoia – and becomes a unicorn   sifted.eu/articles/eleven... · Posted by u/hhs
logiduck · 2 years ago
Seems like the moat they might have is licensing, if they start getting celebrities or studios to license their tech. There's a very big market for animation studios to have this tech.

If they were the shop where you go to get a famous person's or character's voice I could see it reaching this valuation.

But as a pure tech play, seems likely that in 2-3 years the open source models will be good enough to not need an established player.

If you go on Fiverr and ask for a text narration, most of the people on there just use ElevenLabs or a similar service; they aren't actually doing the narration themselves, but are essentially reselling a licensed voice.

u/logiduck

Karma: 205 · Cake day: December 27, 2023