Hi Hacker News,
This is definitely out of my comfort zone. I've never programmed DSP before. But I was able to use Claude Code and have it help me build this using CMajor.
I just wanted to show you guys because I'm super proud of it. It's a 100% faithful recreation based on the schematics, patents, and ROMs that were found online.
So please watch the video and tell me what you think.
The reason I think this is relevant is that I've been a programmer for 25 years and AI scares the shit out of me.
I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!
Thanks!
There was something exciting about sleuthing out how those old machines worked: we used a black-box approach, sending in test samples, recording the output, and comparing against the digital algorithm’s output. Trial and error, slowly building a sense of what sort of filter or harmonics could bend a waveform one way or another.
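To make that concrete, the loop looked roughly like this - purely a sketch, with placeholder file names and a stand-in one-pole filter rather than anything we actually shipped:

    import numpy as np
    import soundfile as sf  # pip install soundfile

    # The test sweep we fed into the hardware, and the recording
    # that came back out. (Placeholder names; assumes mono WAVs.)
    test_in, fs = sf.read("sweep_in.wav")
    hw_out, _ = sf.read("hw_recorded.wav")

    def candidate(x, a=0.2):
        # Stand-in for the current guess at the algorithm:
        # here, just a one-pole lowpass.
        y = np.zeros_like(x)
        for n in range(1, len(x)):
            y[n] = a * x[n] + (1 - a) * y[n - 1]
        return y

    dsp_out = candidate(test_in)

    # Where the magnitude spectra diverge hints at what filtering
    # or harmonics the hardware adds that the model doesn't.
    n = min(len(hw_out), len(dsp_out))
    diff = np.abs(np.fft.rfft(hw_out[:n])) - np.abs(np.fft.rfft(dsp_out[:n]))
    print("worst spectral mismatch:", np.max(np.abs(diff)))

Then you tweak candidate, rerun, and repeat until the mismatch stops telling you anything new.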
I feel like some of this is going to be lost to prompting, the same way hand-tool woodworking has been lost to power tools.
While something is lost in prompting, people will always seek out first principles so they can understand what they are commanding and controlling, especially as old machines become new machines, with capabilities that weren't even imaginable before because of the old software complexity wall.
But smart programmers will realize the world doesn't care about any of that at all.
Though using AI to build the devtools we used for signal analysis would have been helpful.
> I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!
Yes, I agree. I think the role of software developer is going to evolve into much more of an administrative, managerial one, more about working with whatever organisation you're in than actually typing code. Honestly, I think it was probably always heading in this direction, but it's definitely quite a step change. I wrote about it a little incoherently on my blog just this morning: https://redfloatplane.lol/blog/11-2025-the-year-i-didnt-writ...
AI is a force multiplier - it makes bad worse, and it _can_ make good better. You need even more engineering discipline than before to make sure it's the latter and not the former. Even with chained code-quality MCPs and a whole bunch of instructions in AGENTS.md, there's often a need to intervene and course-correct, because AI can ignore AGENTS.md, and code that passes quality checks doesn't necessarily rest on a solid architecture.
That being said, I do agree our job is changing from merely writing code to more of a managerial role, like you've said. But there's a new limit - your ability to review the output, and you most definitely should review the output if you care about long-term, sustainable, quality software.
But AI being solely a force multiplier is not accurate; it is an intelligence multiplier. There are significantly better ways now to apply skills and taste with less worry about technical debt. AI coding agents have gotten to the point where they remove virtually ALL effort barriers, even around paying off technical debt.
While it is still important to pay attention to the direction your generated code is taking, the old fears and caution we attributed to previous iterations of AI codegen are largely being eroded, and this trend will continue to the point where our "specialty" no longer matters.
I'm already seeing small businesses that have laid off their teams, with the business owner generating the code themselves. Defending the thinning moat around not just software but virtually all white-collar jobs is getting tougher.
If software becomes cheaper to make, it amortizes at a higher rate, i.e., it becomes less valuable at a faster clip. This means more ephemeral software with a shorter shelf life. What exactly is wrong with a world where software is borderline disposable?
I’ve been using Photoshop since the 90s, and if I hadn’t watched the features expand over the years, I don’t think I would find the tool useful; it isn’t approachable for someone without a lot of experience.
That being said, short-lived, highly targeted, less feature-rich software for image creation and manipulation, tailored to the individual and specific to an immediate task, seems advantageous.
Dynamism applied not to the code but to the products themselves.
Or something like that.
I'll update the post to reflect the reality, thanks for calling it out.
I completely agree with your comment. I think the ability to review code, architecture, abstractions matters more than the actual writing of the code - in fact this has really always been the case, it's just clearer now that everyone has a lackey to do the typing for them.
If you look at my other comment in this thread, my project is about designing proprioceptive touch sensors (robot skin) using a soft-body simulator largely built with the help of an AI. At this stage, absolute physical accuracy isn’t really the point. By design, the system already includes a neural model in the loop (via EIT), so the notion of "accuracy" is ultimately evaluated through that learned representation rather than against raw physical equations alone.
What I need instead is a model that is faithful to my constraints: very cheap, easily accessible materials, with properties that are usually considered undesirable for sensing: instability, high hysteresis, low gauge factor. My bet is that these constraints can be compensated for by a more circular system design, where the geometry of the sensor is optimized to work with them.
Bridging the gap to reality is intentionally simple: 3D-print whatever geometry the simulator converges to, run the same strain/stress tests on the physical samples, and use that data to fine-tune the sensor model.
Since everything is ultimately interpreted through a neural network, some physical imprecision upstream may actually be acceptable, or even beneficial, if it makes the eventual transfer and fine-tuning on real-world data easier.
And I'm not doing it by ear. I know the algorithm, I have the exact coefficients, and there was no guesswork except for the potentiometer curves and parts of the room algorithm that I'm still working out, which is a completely separate component of the reverb.
But when I put it up for sale, I'll make sure to go into detail about all that so people who buy it know what they're getting.
Consider reaching out to Audiority - I know they have some virtual recreations of Space Station hardware.
https://www.audiority.com/shop/space-station-um282
I tried this for laughs with Gemini 3 Pro. It spat out the same ZDF implementation that's in countless GitHub repos, originating from the second Pirkle FX book (2019).
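For anyone who hasn't seen it, the boilerplate in question is the trapezoidal-integrator (TPT) one-pole that Zavalishin popularized and that Pirkle's book also presents. Roughly, paraphrased in Python (my paraphrase, not Gemini's exact output):

    import math

    class ZDFOnePole:
        # Zero-delay-feedback one-pole lowpass.
        def __init__(self, fs, fc):
            g = math.tan(math.pi * fc / fs)  # prewarped integrator gain
            self.G = g / (1.0 + g)           # resolves the implicit feedback loop
            self.s = 0.0                     # integrator state

        def process(self, x):
            v = (x - self.s) * self.G        # instantaneous input response
            y = v + self.s
            self.s = y + v                   # trapezoidal state update
            return y

In other words, it regurgitated the one snippet the training data is saturated with, not anything resembling the Space Station's actual algorithm.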
The video claims: "It utilizes the actual DSP characteristics of the original to bring that specific sound back to life." The author admits they have never programmed DSP. So how are they verifying this claim?
I had a small epiphany a couple of weeks ago while thinking about robot skin design: using conductive 3D-printed structures whose electrical properties change under strain, combined with electrical impulses, a handful of electrodes, a machine-learning model to interpret the measurements, and computational design to optimize the printed geometry.
While digging into the literature, I realized that what I was trying to do already has a name: proprioception via electrical impedance tomography. It turns out the field is very active right now.
https://www.cam.ac.uk/stories/robotic-skin
That realization led me to build a Bergström–Boyce nonlinear viscoelastic parallel rheological simulator using Taichi. This is far outside my comfort zone. I’m just a regular programmer with no formal background in physics (apart from some past exposure to Newton-Raphson).
Interestingly, my main contribution hasn’t been the math. It’s been providing basic, common-sense guidance to my LLM. For example, I had to explicitly tell it which parameters were fixed by experimental data and which ones were meant to be inferred. In another case, the agent assumed that all the red curves in the paper I'm working with referred to the same sample, when they actually correspond to different conducting NinjaFlex specimens under strain.
Correcting those kinds of assumptions, rather than fixing equations, was what allowed me to reproduce the results I was seeking. I now have an analytical, physics-grounded model that fits the published data. Mullins effect: modeled. Next up: creep.
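Concretely, that guidance looked less like equations and more like pinning down which knobs the optimizer was allowed to turn. A toy version of the distinction (model and parameter names are illustrative, not the actual Bergström–Boyce setup):

    import numpy as np
    from scipy.optimize import curve_fit

    # Stiffness pinned by our own measurements: fixed, never fitted.
    E_MEASURED = 12.0e6

    def stress(strain, eta, m):
        # eta and m are the only parameters the fit may adjust.
        return E_MEASURED * strain + eta * strain**m

    strain = np.linspace(0.01, 0.5, 50)
    observed = stress(strain, 3.0e6, 1.7) + np.random.normal(0, 1e4, 50)

    # curve_fit only ever sees (eta, m); E_MEASURED stays out of
    # the search space - exactly what I had to spell out for the LLM.
    (eta_fit, m_fit), _ = curve_fit(stress, strain, observed, p0=[1e6, 1.0])
    print(eta_fit, m_fit)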
We’ll see how far this goes. I’ll probably never produce anything publishable, patentable, or industrial-grade. But I might end up building a very cheap (and hopefully not that inaccurate), printable proprioceptive sensor, with a structure optimized so it can be interpreted by much smaller neural networks than those used in the Cambridge paper.
If that works, the effort will have been worth it.
What I have now is similar to https://youtu.be/nXrEX6j-Mws?si=XdPA48jymWcapQ-8 but I haven’t implemented a cohesive UI yet.
Sometimes it feels like I'm living in a different world, reading the scepticism on here about AI.
I'm sure there are enterprise cases where it doesn't make sense, but for your everyday business owner, it's amazing what can be done.
Maybe it's a failure of imagination, but I can't imagine a world where this doesn't impact the enterprise in short order.
I've been doing this for 25 years and I can tell you that the AI is a better coder than me, but I know how to use it. I review the code it puts out, and it's better. I'm assuming the developers who are having a hard time with it just aren't as experienced with it.
If you think your job is going to stay "programmer", I just don't see it. I think you need to start providing value and using coding as just a means to that end, rather than coding being valuable in itself. It's just not as valuable anymore.
*edit* well duh, it’s the same guy!