Readit News
Posted by u/_o1yi a month ago
Show HN: I used AI to recreate a $4000 piece of audio hardware as a plugin
Hi Hacker News,

This is definitely out of my comfort zone. I've never programmed DSP before. But I was able to use Claude Code and have it help me build this using Cmajor.

I just wanted to show you guys because I'm super proud of it. It's a 100% faithful recreation based on the schematics, patents, and ROMs that were found online.

So please watch the video and tell me what you think.

https://youtu.be/auOlZXI1VxA

The reason why I think this is relevant is because I've been a programmer for 25 years and AI scares the shit out of me.

I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!

Thanks!

franky47 · a month ago
I used to do that exact job 10 years ago (without AI, obviously). I figure that career would be very different now.

There was something exciting about sleuthing out how those old machines worked: we used a black box approach, sending in test samples, recording the output, and comparing against the digital algorithm’s output. Trial and error, slowly building a sense of what sort of filter or harmonics could bend a waveform one way or another.
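The trial-and-error workflow described above can be sketched in a few lines of Python. Everything here is a stand-in: the "hardware" is simulated as a simple one-pole lowpass, and the sleuthing step just scores candidate coefficients against its recorded output.

```python
# Hedged sketch of black-box reverse engineering: feed a test signal
# through the (here simulated) hardware, then score candidate digital
# algorithms against the recording. A real session would use actual
# recorded samples instead of the stand-in filter below.
import numpy as np

def one_pole_lowpass(x, a):
    # y[n] = a*x[n] + (1-a)*y[n-1], a toy stand-in for the unknown box
    y = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state = a * x[n] + (1.0 - a) * state
        y[n] = state
    return y

rng = np.random.default_rng(0)
test_signal = rng.standard_normal(4096)          # white-noise test sample

# Pretend "hardware" recording: the unknown box happens to be a=0.25
hardware_out = one_pole_lowpass(test_signal, 0.25)

# Trial and error: score candidate coefficients against the recording
candidates = [0.1, 0.25, 0.5]
errors = [np.sqrt(np.mean((one_pole_lowpass(test_signal, a) - hardware_out) ** 2))
          for a in candidates]
best = candidates[int(np.argmin(errors))]
print(best)  # 0.25, the coefficient that matches the "hardware"
```

In practice the search space is far bigger (filter topologies, saturation curves, modulation), but the loop of "send signal in, compare what comes out" is the same.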

I feel like some of this is going to be lost to prompting, the same way hand-tool woodworking has been lost to power tools.

mycall · a month ago
It will be the future for sure, software as a tool for everyone.

While there is something lost in prompting, people will always seek out first principles so they can understand what they are commanding and controlling, especially as old machines become new machines with capabilities not even imaginable before, due to the old software-complexity wall.

_o1yi · a month ago
It's exactly as you say: software as a tool for everyone. It's hard for programmers like me to accept that because I've spent so much time, read so many books, and worked so hard perfecting my craft.

But smart programmers will realize the world doesn't care about any of that at all.

ceva · a month ago
I think you will like this talk https://youtu.be/XM_q5T7wTpQ?si=Nyb4lZEZjsjCCGBg

Deleted Comment

rmnclmnt · a month ago
I wonder if we could have released the *stressor in a few months then...
franky47 · a month ago
I’d love to see someone try.

Though using AI to build the devtools we used for signal analysis would have been helpful.

hebejebelus · a month ago
I was hoping that the video was a walkthrough of your process - do you think you might share that at some point?

> I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!

Yes, I agree. I think the role of software developer is going to evolve into much more of an administrative, managerial role, dealing more with working with whatever organisation you're in than actually typing code. Honestly, I think it was probably always heading in this direction, but it's definitely quite a step change. I wrote about it a little incoherently on my blog just this morning: https://redfloatplane.lol/blog/11-2025-the-year-i-didnt-writ...

askonomm · a month ago
As someone who works at a place where we do a lot of code analysis and also research AI's effect on code quality: if you do not so much as look at your code anymore, I do not believe you are creating maintainable, quality software. Maybe you don't need to, or care to, but it's definitely not sustainable in long-term product companies.

AI is a force multiplier - it makes bad worse, and it _can_ make good better. You need even more engineering discipline than before to make sure it's the latter and not the former. Even with chaining code-quality MCPs and a whole bunch of instructions in AGENTS.md, there's often a need to intervene and course-adjust, because the AI can ignore AGENTS.md, or because passing code-quality checks does not always mean the architecture is solid.

That being said, I do agree our job is changing from merely writing code, to more of a managerial title, like you've said. But, there's a new limit - your ability to review the output, and you most definitely should review the output if you care about long-term sustainable, quality software.

agentifysh · a month ago
Six months ago I agreed with your statement.

But AI being solely a force multiplier is not accurate; it is an intelligence multiplier. There are significantly better ways now to apply skills and taste with less worry about technical debt. AI coding agents have gotten to the point that they remove virtually ALL effort barriers, even for paying off technical debt.

While it is still important to pay attention to the direction your code is being generated in, the old fears and caution we attributed to previous iterations of AI codegen are largely being eroded, and this trend will continue to the point where our "specialty" will no longer matter.

I'm already seeing small businesses that laid off their teams, with the business owner generating code themselves. The moat around not only software but virtually all white-collar jobs is thinning, and defending it is getting tougher.

williamcotton · a month ago
> if you care about long-term sustainable, quality software

If software becomes cheaper to make, it amortizes at a higher rate, i.e., it becomes less valuable at a faster clip. This means more ephemeral software with a shorter shelf life. What exactly is wrong with a world where software is borderline disposable?

I’ve been using Photoshop since the 90s, and without having watched the features expand over the years, I don’t think the tool would be useful to someone without a lot of experience.

This being said, short-lived and highly targeted, less feature-full software for image creation and manipulation catered to the individual and specific to an immediate task seems advantageous.

Dynamism applied not to the code but to the products themselves.

Or something like that.

hebejebelus · a month ago
Yes, I didn't do a great job of managing my language in that post (I blame flu-brain). In the case where _someone_ is going to be reading the code I output, I do review it and act more as the pilot-not-flying rather than as a passenger. For personal code (as opposed to code for a client), which is the majority of what I've written since Opus 4.5 was released, that's not been the case.

I'll update the post to reflect the reality, thanks for calling it out.

I completely agree with your comment. I think the ability to review code, architecture, abstractions matters more than the actual writing of the code - in fact this has really always been the case, it's just clearer now that everyone has a lackey to do the typing for them.

Deleted Comment

lifetimerubyist · a month ago
Instead of becoming a people manager you're just a bot manager. Same roles, different underlings.
Blackthorn · a month ago
How can you say it's a 100% faithful recreation if you've never programmed DSP before?
utopiah · a month ago
Indeed, the same question came up a few days ago when somebody shared a "generated" NES emulator. This needs to be answered when sharing, otherwise we can't compare.
le-mark · a month ago
At some point the LLM ingested a few open-source NES emulators and many articles on their architecture. So I question the LLM creativity involved in these types of examples. Probably also for DSPs.
Xmd5a · a month ago
I’m not claiming a 100% faithful physical recreation in the strict scientific sense.

If you look at my other comment in this thread, my project is about designing proprioceptive touch sensors (robot skin) using a soft-body simulator largely built with the help of an AI. At this stage, absolute physical accuracy isn’t really the point. By design, the system already includes a neural model in the loop (via EIT), so the notion of "accuracy" is ultimately evaluated through that learned representation rather than against raw physical equations alone.

What I need instead is a model that is faithful to my constraints: very cheap, easily accessible materials, with properties that are usually considered undesirable for sensing: instability, high hysteresis, low gauge factor. My bet is that these constraints can be compensated for by a more circular system design, where the geometry of the sensor is optimized to work with them.

Bridging the gap to reality is intentionally simple: 3D-print whatever geometry the simulator converges to, run the same strain/stress tests on the physical samples, and use that data to fine-tune the sensor model.

Since everything is ultimately interpreted through a neural network, some physical imprecision upstream may actually be acceptable, or even beneficial, if it makes the eventual transfer and fine-tuning on real-world data easier.

_o1yi · a month ago
I had the hardware for both units and used them extensively, so I'm 100% familiar with how they sound.

And I'm not doing it based on my ears. I know the algorithm, I have the exact coefficients, and there was no guesswork except for the potentiometer curves and parts of the room algorithm that I'm still working out, which is a completely separate component of the reverb.

But when I put it up for sale, I'll make sure to go into detail about all that so people who buy it know what they're getting.

vunderba · a month ago
Can you sell it, or would you have to do some renaming to get around trademarks, etc.?

Consider reaching out to Audiority - I know they have some virtual recreations of Space Station hardware.

https://www.audiority.com/shop/space-station-um282

wrl · a month ago
Are you also going to go into detail about the use of AI to generate the code?
indigodaddy · a month ago
Sell it?
glimshe · a month ago
Perhaps a subjective evaluation based on how it sounds.
steveBK123 · a month ago
It’s bold to call it 100% faithful without some rigorous test harness though, isn’t it?
gbraad · a month ago
Standard AI response. Similar to "production-ready", "according to industry standards" or "common practices", used to justify an action or to indicate it is done, without even compiling or running the code, let alone understanding the output. An AI can't hear and, even worse, can't relate to this. Ask it to create a diode ladder filter, and it will boast that it created a "physically correct analog representation" while outputting clean and pure signals...
Archit3ch · a month ago
For context, I'm working on a proper SPICE component-level Diode Ladder.

I tried this for laughs with Gemini 3 Pro. It spat out the same ZDF implementation that is in countless GitHub repos, originating from the 2nd Pirkle FX book (2019).
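For readers wondering what "ZDF" refers to here: zero-delay-feedback filters solve the feedback loop implicitly instead of inserting a unit delay. Below is a minimal sketch (assumptions mine, not the Pirkle or Gemini code) of the one-pole ZDF lowpass building block; a full diode ladder chains several of these with nonlinear feedback.

```python
# Minimal zero-delay-feedback (ZDF / "topology-preserving transform")
# one-pole lowpass, the basic building block of ZDF ladder models.
# This is only an illustrative sketch, not a diode ladder.
import math

class ZDFOnePole:
    def __init__(self, cutoff_hz, sample_rate):
        # Pre-warped integrator gain so the analog cutoff maps correctly
        self.g = math.tan(math.pi * cutoff_hz / sample_rate)
        self.s = 0.0  # trapezoidal integrator state

    def process(self, x):
        # Solve the implicit feedback equation in closed form
        v = (x - self.s) * self.g / (1.0 + self.g)
        y = v + self.s
        self.s = y + v  # integrator state update
        return y

lp = ZDFOnePole(1000.0, 48000.0)
out = [lp.process(1.0) for _ in range(2000)]
print(out[-1])  # step response settles toward the DC input value of 1.0
```

The "zero-delay" part is that `v` is solved algebraically from the current input rather than from last sample's output, which keeps the cutoff accurate near Nyquist.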

Dead Comment

Dead Comment

baq · a month ago
Maybe the OP has the hardware and can compare the sound both subjectively and objectively? Does it have to be a 100% exact copy to be called the same? (Individual electronic components are never the same, btw.)
Blackthorn · a month ago
The OP didn't clarify. But if there's a claim of 100% faithful recreation, I'd expect something to back it up, like time- and frequency-domain comparisons of input and output with different test signals. Or at least something. But there isn't anything.

The video claims: "It utilizes the actual DSP characteristics of the original to bring that specific sound back to life." The author admits they have never programmed DSP. So how are they verifying this claim?
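One concrete way to back up such a claim, along the lines suggested above, is to measure both systems' impulse responses and report the worst-case magnitude deviation across frequency. The filters below are hypothetical stand-ins; a real test would substitute recorded hardware output and the plugin's output.

```python
# Hedged sketch of a verification harness: compare the magnitude
# responses of the hardware and the recreation in dB. Both impulse
# responses here are made-up short FIRs standing in for measurements.
import numpy as np

def magnitude_db(ir, n_fft=4096):
    # Zero-padded FFT magnitude in dB, floored to avoid log(0)
    spectrum = np.abs(np.fft.rfft(ir, n_fft))
    return 20.0 * np.log10(np.maximum(spectrum, 1e-12))

hardware_ir = np.array([0.5, 0.3, 0.15, 0.05])  # stand-in measurement
plugin_ir = np.array([0.5, 0.3, 0.14, 0.05])    # candidate, slightly off

deviation_db = np.max(np.abs(magnitude_db(hardware_ir)
                             - magnitude_db(plugin_ir)))
print(deviation_db)  # worst-case mismatch; a real test sets a tolerance
```

Repeating this with impulses, sweeps, and program material at several settings is roughly what "something to back it up" would look like.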

_DeadFred_ · a month ago
That might make it 100% faithful for OPs use cases, but not necessarily anyone else's.
Xmd5a · a month ago
Very nice work. I’m curious: what kinds of projects are you guys currently working on that genuinely push you out of your comfort zone?

I had a small epiphany a couple of weeks ago while thinking about robot skin design: using conductive 3D-printed structures whose electrical properties change under strain, combined with electrical impulses, a handful of electrodes, a machine-learning model to interpret the measurements, and computational design to optimize the printed geometry.

While digging into the literature, I realized that what I was trying to do already has a name: proprioception via electrical impedance tomography. It turns out the field is very active right now.

https://www.cam.ac.uk/stories/robotic-skin

That realization led me to build a Bergström–Boyce nonlinear viscoelastic parallel rheological simulator using Taichi. This is far outside my comfort zone. I’m just a regular programmer with no formal background in physics (apart from some past exposure to Newton-Raphson).

Interestingly, my main contribution hasn’t been the math. It’s been providing basic, common-sense guidance to my LLM. For example, I had to explicitly tell it which parameters were fixed by experimental data and which ones were meant to be inferred. In another case, the agent assumed that all the red curves in the paper I'm working with referred to the same sample, when they actually correspond to different conducting NinjaFlex specimens under strain.

Correcting those kinds of assumptions, rather than fixing equations, was what allowed me to reproduce the results I was seeking. I now have an analytical, physics-grounded model that fits the published data. Mullins effect: modeled. Next up: creep.
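The fixed-versus-inferred split described above can be illustrated with a toy fit (this is deliberately not the Bergström–Boyce model; the linear elastic-plus-viscous form, parameter names, and numbers are all invented for illustration): the modulus is pinned by experimental data, and only the remaining coefficient is solved for.

```python
# Toy illustration of fitting with some parameters fixed by experiment:
# E is treated as known, while the viscous coefficient eta is inferred
# by least squares from synthetic stress measurements.
import numpy as np

rng = np.random.default_rng(1)

E_fixed = 2.5    # assumed known from experimental data (arbitrary units)
eta_true = 0.8   # "unknown" coefficient the fit should recover

strain = np.linspace(0.0, 0.5, 50)
strain_rate = np.full_like(strain, 0.1)
stress = E_fixed * strain + eta_true * strain_rate
stress += rng.normal(0.0, 1e-3, stress.shape)  # measurement noise

# Only eta is free: subtract the fixed elastic part, then solve the
# one-parameter linear least-squares problem for eta.
residual = stress - E_fixed * strain
eta_est = np.linalg.lstsq(strain_rate[:, None], residual, rcond=None)[0][0]
print(eta_est)  # close to 0.8
```

Telling the model which parameters live in `E_fixed` territory versus `eta_true` territory is exactly the kind of common-sense guidance described in the comment.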

We’ll see how far this goes. I’ll probably never produce anything publishable, patentable, or industrial-grade. But I might end up building a very cheap (and hopefully not that inaccurate), printable proprioceptive sensor, with a structure optimized so it can be interpreted by much smaller neural networks than those used in the Cambridge paper.

If that works, the effort will have been worth it.

brcmthrowaway · a month ago
I would put more effort into algotrading to make $$$ for yourself
gbraad · a month ago
Isn't that like the Ursa Major Stargate 323 Reverb? Greybox Audio released code for this about a year ago: https://github.com/greyboxaudio/SG-323
pera · a month ago
Thanks for mentioning this project, I have been looking for a good reverb plugin for Linux for a while now and this sounds great.
gbraad · a month ago
There might be a plugin based on Freeverb, which is also a good-sounding one. I have it as a logue unit, so I can't recommend one immediately. At least I know Greybox based his on actual device comparison, as he owns one and has been doing this for 5 years sans AI.
LatencyKills · a month ago
This is fantastic. I’m currently building a combustion engine simulator doing exactly what you did. In fact, I found a number of research papers, had Claude implement the included algorithms, and then incorporated them into the project.

What I have now is similar to https://youtu.be/nXrEX6j-Mws?si=XdPA48jymWcapQ-8 but I haven’t implemented a cohesive UI yet.

_o1yi · a month ago
Right on, that's awesome! I think I'm doing more what you did vs. the other way around. Looks like you're pretty established. How long did it take to build your YouTube channel to what it is? What's that process been like?
dubeye · a month ago
Awesome. In 2025 I made a few apps for my small business that I had spent hours trawling the web looking for, and I have little coding skill.

Sometimes it feels like I'm living in a different world, reading the scepticism on here about AI.

I'm sure there are enterprise cases where it doesn't make sense, but for your everyday business owner it's amazing what can be done.

maybe it's a failure of imagination but I can't imagine a world where this doesn't impact enterprise in short order

hatefulheart · a month ago
With all due respect you are living in a different world. Not in a bad way, it’s just you haven’t experienced what maintenance on a large complicated code base is like.
mirsadm · a month ago
The worst part of the new wave of vibe coders is their confidence.
kaffekaka · a month ago
Different worlds yes, but they both exist.
williamcotton · a month ago
Maybe the problem is large complicated codebases?
HPsquared · a month ago
I think there will be a transition period.
_o1yi · a month ago
No, you're absolutely right. One of the things I'm starting to see (and I wrote another Hacker News post about this) is that more people are starting to come out talking about all the mistakes AI is making, even as it gets better. Then you've got people like Karpathy talking about how drastically the landscape is shifting.

I've been doing this for 25 years and I can tell you that the AI is a better coder than me, but I know how to use it. I review the code it puts out, and it's better. I'm assuming the developers who are having a hard time with it are just not as experienced with it.

If you think your job is going to stay "programmer", I just don't see it. I think you need to start providing value and use coding as just a means to do that, rather than coding being valuable in itself. It's just not as valuable anymore.

pengaru · a month ago
> you're absolutely right.

Deleted Comment

drcongo · a month ago
Cmajor, for anyone wondering: https://github.com/cmajor-lang/cmajor
reactordev · a month ago
All I’ve ever known was JUCE. This looks nice!

*edit* well duh, it’s the same guy!

_o1yi · a month ago
It's incredible