6keZbCECT2uB commented on Going faster than memcpy   squadrick.dev/journal/goi... · Posted by u/snihalani
waschl · a month ago
Thought about zero-copy IPC recently. To avoid memcpy along the complete chain, I guess it would be best if the sender allocates its payload directly in shared memory when it's created. Is this a standard thing in such optimized IPC, and which libraries offer it?
6keZbCECT2uB · a month ago
I've been meaning to look at Iceoryx as a way to wrap this.

PyTorch multiprocessing queues work this way, but it is hard for the sender to ensure the data is already in shared memory, so there is often a copy. It is also common for buffers not to be reused, which can become a bottleneck, though in principle the cost can be limited to the rate of sending fds.
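As a rough illustration of that pattern, here is a minimal PyTorch sketch (buffer sizes and names are made up): the sender allocates each tensor in shared memory up front and fills it in place, so the queue only has to ship a handle. Reusing buffers instead of allocating each time would additionally need the consumer to hand them back, e.g. over a second queue.

    import torch
    import torch.multiprocessing as mp

    def consumer(q):
        # Receives handles to tensors that already live in shared memory.
        while (t := q.get()) is not None:
            print(t.sum().item())

    def producer(q, n_msgs=4):
        for _ in range(n_msgs):
            # Allocate the payload directly in shared memory, then fill it
            # in place, so put() ships the shm handle rather than the bytes.
            buf = torch.empty(1024).share_memory_()
            buf.normal_()
            q.put(buf)
        q.put(None)  # sentinel to stop the consumer

    if __name__ == "__main__":
        mp.set_start_method("spawn")
        q = mp.Queue()
        p = mp.Process(target=consumer, args=(q,))
        p.start()
        producer(q)
        p.join()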

6keZbCECT2uB commented on Go 1.24's go tool is one of the best additions to the ecosystem in years   jvt.me/posts/2025/01/27/g... · Posted by u/keybits
munificent · 7 months ago
I think the problem is basically that the build system has to be implemented using some ecosystem, and no other ecosystem wants to depend on that one.

If your "one build system to rule them all" was built in, say, Ruby, the Python ecosystem won't want to use it. No Python evangelist wants to tell users that step 1 of getting up and running with Python is "Install Ruby".

So you tend to get a lot of wheel reinvention across ecosystems.

I don't necessarily think it's a bad thing. Yes, it's a lot of redundant work. But it's also an opportunity to shed historical baggage and learn from previous mistakes. Compare, for example, how beloved Rust's Cargo ecosystem is with the ongoing mess that is package management in Python.

A fresh start can be valuable, and not having a monoculture can be helpful for rapid evolution.

6keZbCECT2uB · 7 months ago
Partly in jest, you can often find a Perl / bash available where you can't find a Python, Ruby, or Cargo.
6keZbCECT2uB commented on Go 1.24's go tool is one of the best additions to the ecosystem in years   jvt.me/posts/2025/01/27/g... · Posted by u/keybits
sunshowers · 7 months ago
> You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript.

Rust handles this fine by unifying up to semver compatibility -- diamond dependency hell is an artifact of the lack of namespacing in many older languages.

6keZbCECT2uB · 7 months ago
Conda unifies by using a SAT solver to find versions of software that are mutually compatible, regardless of whether they agree on the meaning of semver. So both approaches require unifying versions. Linking against C gets pretty broken without this.

The issue I was referring to is that in JavaScript, you can write code which uses multiple versions of the same library that are mutually incompatible. Since they're mutually incompatible, no SAT solver or unifier is going to help you; you must permit multiple versions of the same library in the same environment. So far, my approach of ignoring some JavaScript libraries has worked for my backend development. :)

6keZbCECT2uB commented on Go 1.24's go tool is one of the best additions to the ecosystem in years   jvt.me/posts/2025/01/27/g... · Posted by u/keybits
bjackman · 7 months ago
I always think it's a shame that these features end up getting built into ecosystem-specific build tools. Why do we need separate build systems for every language? It seems entirely possible to have a build system that can do all this stuff for every language at once.

From my experience at Google I _know_ this is possible in a Megamonorepo. I have briefly fiddled with Bazel, and there seems to be quite a barrier to entry; I dunno if that's just lack of experience, but it didn't quite seem ready for small projects.

Maybe Nix is the solution but that has barrier to entry more at the human level - it just seems like a Way of Life that you have to dive all the way into.

Nonetheless, maybe I should try diving into one or both of those tools at some point.

6keZbCECT2uB · 7 months ago
I agree. In my opinion, if you can keep the experience of Bazel limited to build targets, there is a low barrier to entry, even if it is tedious. The major issues show up with Bazel once you start having to write rules or toolchains, or once your WORKSPACE file talks to the Internet.
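To give a sense of scale for "limited to build targets", a target is roughly this much; a minimal BUILD file sketch with made-up file and target names, in Starlark's Python-like syntax:

    # BUILD (hypothetical file and target names)
    cc_library(
        name = "spsc_queue",
        srcs = ["spsc_queue.cc"],
        hdrs = ["spsc_queue.h"],
    )

    cc_binary(
        name = "bench",
        srcs = ["bench.cc"],
        deps = [":spsc_queue"],
    )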

I think you can fix these issues by using a package manager around Bazel. Conda is my preferred choice because it is in the top tier for adoption and cross-platform support, and it supports more locked-down use cases: going through mirrors, not having root, not controlling file paths, etc. What Bazel gets from this is a generic solution for package management, with better version solving for build rules, source dependencies, and binary dependencies. By sourcing binary deps from conda-forge, you get a midpoint between deep investment into Bazel and binaries of unknown provenance, which lets you incrementally move to building from source as appropriate.

Additional notes: some requirements limit utility and amount to only partial support of a platform. If you require root on Linux or WSL on Windows, have frequent compilation breakage on Darwin, or neglect Windows file paths, your cross-platform support is partial in my book.

Use of Java for Bazel and Python for conda might be regrettable, but not bad enough to warrant moving down the adoption list, and in my experience there is vastly more Bazel out there than Buck or other competitors. Similarly, you want to see some adoption from Haskell, Rust, Julia, Golang, Python, C++, etc.

JavaScript is thorny. You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript. I haven't seen too much demand for JavaScript bindings to C++ wrappers around a Rust core that uses C core libraries, but I do see that for Python bindings.

6keZbCECT2uB commented on Programming with ChatGPT   henrikwarne.com/2024/08/2... · Posted by u/jandeboevrie
Tainnor · a year ago
I have my reservations about the quality of LLM-generated code, but since I have neither studied ML in depth nor compared different LLMs enough, I'll refrain from addressing that side of the debate, except maybe to note that "I test the code" is not good enough for any serious project, because we know that tests (manual or automated) can never prove the absence of bugs.

Instead, I offer another point of view: I don't want to use LLMs for coding because I like coding. Finding a good and elegant solution to a complex problem and then translating it into an executable by way of a precise specification is, to me, much more satisfying than prompt engineering my way around some LLM until it spits out a decent answer. I find doing code reviews to be an extremely draining activity and using an LLM would mean basically doing code reviews all the time.

Maybe that will mean that, at some point, I'll have to quit my profession because programming has been replaced by prompt engineering. I guess I'll find something else to do then.

(That doesn't mean that there aren't individual use cases where I have used ChatGPT - for example for writing simple bash scripts, given that nobody in their right mind really understands bash fully. But that's different from having my entire coding workflow based on an LLM.)

6keZbCECT2uB · a year ago
Most of my time coding is spent on none of: elegant solutions, complex problems, or precise specifications.

In my experience, LLMs are useful primarily as rubber ducks on complex problems and rarely useful as code generation for such.

Instead, most of my time between the interesting work goes to rote work that keeps me from getting to the essential complexity, and that rote work is where LLM code gen does better. How do I generate a heat map in Python with a different color scheme? How do I parse some logs to understand our locking behavior? What flags do I pass to tshark to get the output I want?
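The first of those questions is representative: the answer is a couple of lines I'd rather not spend time recalling. A sketch with placeholder data, assuming matplotlib:

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.rand(10, 12)      # placeholder data
    plt.imshow(data, cmap="magma")     # swap the colormap: "viridis", "coolwarm", ...
    plt.colorbar()
    plt.savefig("heatmap.png")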

So, I spend less time coding the above and more time coding how we should redo our data layout for more reuse.

6keZbCECT2uB commented on Aro – Zig's new C compiler   github.com/Vexu/arocc... · Posted by u/whatever3
Laremere · a year ago
translate-c is not required for compiling pure Zig code. However, the plan is to remove the @cImport builtin. So if your project imports a C header, in the future you'll add a build step translating it from C to Zig, and then you import it into your Zig code as a module.
6keZbCECT2uB · a year ago
How does this work with things which are expressible only in C? For example, Pascal-style strings implemented with flexible array members?

I guess since you said header, you keep everything opaque and create a header for that which gets translated to Zig.

6keZbCECT2uB commented on How to choose a textbook that is optimal for oneself?   matheducators.stackexchan... · Posted by u/JustinSkycak
Jeff_Brown · a year ago
Don't!

Faithfulness to a single source is the biggest reason I see for failure in students. Be promiscuous. If a page, chapter, or even a whole book bores you, scan ahead, put it on trial for a bit, and if it doesn't redeem itself quickly, replace it. The same goes (to the extent possible) for courses, teachers, and even whole media. Only once you've tried the whole universe do you have reason to lower your standards and retry something from it that didn't meet your earlier ones. A book isn't a friend. There are no brownie points for completion.

Most subjects are like that too. If you really want to know a natural language and hate the verb rules, focus on the rest of the language. If you soak up the verbs more slowly you'll still be understandable, you'll have fun, and most importantly you won't give up.

And programming languages are especially like this. Don't like class methods? Good! They suck anyway. Keep your functions pure. Don't like generics? Well that's a shame but it didn't stop the first many generations of Go programmers who couldn't use them if they wanted to. Etc.

6keZbCECT2uB · a year ago
In all seriousness, this seems to carry the risk of never doing anything deep or hard. In particular, I've been programming long enough that I can be casual about many programming languages until I hit something which is actually new, such as in Rust or Prolog.

Promiscuous doesn't have to mean having a low tolerance for difficulty, but everything else you wrote seems to support that. So, are you saying that enduring difficulty is unnecessary, or did you mean something different?

6keZbCECT2uB commented on C++ patterns for low-latency applications including high-frequency trading   arxiv.org/abs/2309.04259... · Posted by u/chris_overseas
sneilan1 · a year ago
>> In the get method, you're returning a pointer to the element within the queue after bumping the consumer position (which frees the slot for the producer), so it can get overwritten while the user is accessing it. And then your producer and consumer positions will most likely end up in the same cache line, leading to false sharing.

I did not realize this. Thank you so much for pointing this out. I'm going to take a look.

>> use std::atomic for your producer

Yes, it is hard to get these data structures right. I used Martin Fowler's description of the LMAX algorithm, which did not mention atomics. https://martinfowler.com/articles/lmax.html I'll check out the paper.

6keZbCECT2uB · a year ago
Fowler's implementation is written in Java, which has a different memory model from C++. For another example of the Java memory model versus a different language's, see Jon Gjengset's port of ConcurrentHashMap to Rust.
6keZbCECT2uB commented on Leantime: Open-Source Jira Alternative   github.com/Leantime/leant... · Posted by u/intheleantime
intheleantime · 2 years ago
Thanks for the feedback. This may not have come across clearly enough on the website. The AI prioritization is an optional mechanism (a toggle) to help individual users prioritize the tasks that have already been assigned to them, using signals like task sentiment (how you feel about a task), priority, etc.

The priority field is not changed as part of that function. I agree that AI cannot be the primary prioritization mechanism. All it can do is augment existing processes and help individuals be more effective.

6keZbCECT2uB · 2 years ago
How does the tool ingest task sentiment? As a developer, I would never put in writing that I'm less than enthusiastic about any task.
6keZbCECT2uB commented on Clang-expand: Expand function invocations into current scope   github.com/goldsborough/c... · Posted by u/6keZbCECT2uB
gpderetta · 2 years ago
Very very nice. But it should really be an action in clangd instead of its own tool, for simple integration with LSP.

Also instead of always replacing the text, it could also be an overlay, where the function call is temporarily expanded in your IDE while in some special mode.

6keZbCECT2uB · 2 years ago
The choice between replacing the text and showing an overlay is up to your editor. I think it would be pretty easy to handle either choice as an editor plugin, given the returned JSON.

I was surprised it wasn't combined with clangd.
