D is an awesome language to work with; it has many useful language features that make writing code a pleasure. I hope this criticism is taken constructively.
Not to beat a dead horse, but if you want to process more than a trickle of data in it, you run into problems with the GC really quickly. I really feel the language would be better off without the GC.
These are not the same issues addressed by the JSON compiler post (or whatever it was) that surfaced a couple of months ago.
From what I can tell, there's a global lock around everything in the GC, including allocations. In a multi-core world, this simply doesn't work, and it is one of the major pain points of the language. I write data-intensive processing code on high-core-count machines (32 cores), and I have had to resort to zero-allocation strategies, or, in map-reduce contexts, actually sharding at the process level, writing the results to disk, and then running a reducer process over the results.
You can write performant D code, but you give up a large amount of code safety. It's essentially just whatever you'd write in C++, minus the ownership semantics C++ gives you.
You can't have a core part of your language be an essentially unavoidable, massive point of contention.
I've literally seen 20x or more speedups in multithreaded cases just by making sure I reuse every buffer rather than create new ones.
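Schematically, the difference is something like this (an illustrative sketch, not the actual code in question):

void processChunks()
{
    auto buffer = new ubyte[64 * 1024]; // allocated once, reused every iteration

    foreach (i; 0 .. 1_000)
    {
        // auto chunk = new ubyte[64 * 1024]; // the slow version: every
        // allocation takes the GC's global lock, so threads serialize here
        auto chunk = buffer[];   // the fast version: a view, no allocation
        chunk[] = cast(ubyte) i; // stand-in for real per-chunk work
    }
}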
I feel this is really holding back an otherwise great language to work in.
This is discussed in the reddit thread in more detail.
Reuse rather than free and reallocate is a core practice whenever you feel the need for speed, regardless of the memory allocation strategy used.
For some very fast D code:
https://github.com/facebook/warp
Minimizing the amount of heap memory allocated is a core strategy.
Thanks for responding. I've seen this code used as an example before, actually.
Not that buffer reuse / avoiding heap allocations was unknown to me; it was just surprising to see it matter in an application that spent most of its time waiting on the network.
I will say, as a positive point, that the D code equivalent to a C++ implementation was much cleaner, due to built-in array slicing among other things.
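To illustrate the slicing point (a small made-up example): a slice is just a cheap view into an existing array, so code can hand out sub-ranges without copying:

void main()
{
    ubyte[] packet = new ubyte[128];
    auto header  = packet[0 .. 16];  // no copy: just a pointer/length pair
    auto payload = packet[16 .. $];  // '$' means the array's length
    header[] = 0xFF;                 // writes through to 'packet'
    assert(packet[0] == 0xFF && payload.length == 112);
}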
There are other areas where heap allocations are not so avoidable, however (some hashmaps, some standard-library types). My main point is that, as in all languages, heap allocations are slow, but here they carry an unnecessarily large contention factor.
I really like D so I hope this is helpful feedback.
"Reuse rather than free and reallocate is a core practice whenever you feel the need for speed"
The problem with this conventional wisdom is that it defeats other optimisations. If you allocate, use, and deallocate a block of memory within a compilation unit, a compiler can transparently allocate it on the stack, or even keep it in registers.
If you use a free list, the objects always escape, and the same optimisations can't be applied to them.
This is particularly true for small temporary objects, like an intermediate vector (as in coordinates) object.
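For example (illustrative code): a temporary that never leaves a function can be promoted to the stack or registers, while a free-list object's address has already escaped:

struct Vec3 { double x, y, z; }

Vec3 midpoint(Vec3 a, Vec3 b)
{
    // 'tmp' never escapes this function, so a compiler is free to keep
    // it on the stack or entirely in registers.
    auto tmp = Vec3(a.x + b.x, a.y + b.y, a.z + b.z);
    return Vec3(tmp.x / 2, tmp.y / 2, tmp.z / 2);
}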
I got a speedup of around 13.32x on a 16-core Opteron machine a few years ago, using a simple parallel foreach in an N-body simulator. I didn't do anything special for it.
I have thought about using SIMD to accelerate the integrator and the acceleration calculation further, inside the parallel foreach.
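The change really is minimal; roughly this (a sketch, not the actual simulator code):

import std.parallelism : parallel;

void integrate(double[] positions, const double[] velocities, double dt)
{
    // Each body's update is independent, so the iterations can be
    // distributed across cores with no further changes.
    foreach (i, ref p; parallel(positions))
    {
        p += velocities[i] * dt;
    }
}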
Unless you are creating or destroying bodies, your N-body implementation won't need to allocate any memory, will it? So is it relevant to what he's talking about?
I just discovered that the language authors are converting the standard library (Phobos) to be "no GC".
Algorithms, by and large, do not need to dynamically allocate memory. By converting more of Phobos to being algorithms, they become agnostic to whatever allocation method is used. Allocation strategy becomes a high level decision, rather than a low level one.
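A sketch of what that means in practice: range-based algorithms like these are lazy and allocate nothing themselves, so the caller decides if and how to copy (illustrative code, not actual Phobos internals):

import std.algorithm : filter, map, sum;

int sumOfDoubledEvens(const int[] xs)
{
    // filter/map build lazy wrapper ranges on the stack; no heap
    // allocation happens anywhere in this chain.
    return xs.filter!(x => x % 2 == 0)
             .map!(x => x * 2)
             .sum;
}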
D with a HotSpot-comparable GC would be quite good.
D is a fantastic language. But C++ is very difficult to unseat as a systems language now, since it is back to being a moving target: it is both evolving and has extremely robust tools to go with it. Challenging it on the language level alone is not enough.
Yeah. C++ has such good tooling that no matter what new language I try, I always hit some implementation, ecosystem, or tooling issue that leads me to conclude I would have been more productive with C++. Which sucks monkeyballs, as aesthetics go.
How does this work, specifically conditional compilation? I have lots of code where the OS X path calls functions that do not exist on Linux, and vice versa. How does D handle this?
The salient point is that conditional compilation in D is not text based, it is AST based. Furthermore, for 'static if', the conditional expression has access to the full symbol table and power of the D language - it is not a separate preprocessor language with its own rules and separate symbol table.
http://dlang.org/version.html
version (Posix) {
    version (OSX) {
        // Weird stuff on OSX
    }
    // Sane stuff for Linux or FreeBSD
}
version (Windows) {
    // Weird stuff for Windows
}
But note that this is processed by the compiler, not by a text preprocessor. Also, you can define your own version identifiers; for example, here: https://github.com/Zardoz89/nBodySim/blob/master/source/app.... there is a "PFor" version that allows compiling a variant of the program (an N-body simulator) that uses a parallel foreach.
The constructs are simply built into the language.
This has considerable benefits: you can use e.g. `static if` to enable/disable code based on conditions which refer to constants declared in the program (there is no longer a separate namespace for preprocessor defines and program constants), and with CTFE, you can use any expression that can be evaluated during compilation.
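For example, a small sketch where the condition refers to an ordinary constant in the program rather than a preprocessor symbol:

enum cacheLine = 64; // an ordinary program constant
enum useCompactLayout = cacheLine < 128;

static if (useCompactLayout)
{
    struct Node { uint next; uint value; }  // compiled when the condition holds
}
else
{
    struct Node { Node* next; ulong value; } // otherwise this one is compiled
}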
I'm curious how many people out there on HN actually use D, and for what? I don't typically see D-based projects here on HN, so it would be interesting to find out about any use cases where D was found to be a perfect fit.
Here's a bit of code from a personal shopping comparison program I wrote a few days ago, which combines some standard library constructs with my own libraries:
I know your projects! You made RABCDAsm, which is used extensively on a certain game I used to play around in; people are still using it. Hah, I mostly asked this question because I was reminded of you: you're one of the first developers I saw who does most of their work in D.
Thanks for all your hard work. There are people out there who found RABCDAsm really helpful, and it inspired a lot of fun software in the process.
D fits lots of purposes (more than C++), but I'm only going to talk about scripting.
The minimalistic approach of the C++ standard library makes C++ inappropriate for replacing, for example, Bash scripts. In C++, you can't do any operation on the file system (erase a directory, get a file's size, list files, create pipes, ...) in a portable way. You can't create processes in a portable way.
The D standard library (aka Phobos) doesn't suffer these limitations, which means you can write very high-level code using only a few (readable) lines.
By adding a shebang "#!/usr/bin/env rdmd" at the beginning of your D programs, you can execute them without prior explicit compilation, like: ./myProgram.d
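A sketch of what such a script can look like, using only Phobos (the task here is made up):

#!/usr/bin/env rdmd
// List regular files in the current directory, largest first.
import std.algorithm : each, filter, sort;
import std.array : array;
import std.file : SpanMode, dirEntries;
import std.stdio : writeln;

void main()
{
    dirEntries(".", SpanMode.shallow)
        .filter!(e => e.isFile)
        .array
        .sort!((a, b) => a.size > b.size)
        .each!(e => writeln(e.size, "\t", e.name));
}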
I've been porting all my Bash scripts to D. The code runs vastly faster, and now I can have structs again!
http://tech.adroll.com/blog/data/2014/11/17/d-is-for-data-sc...
[1] https://github.com/facebook/warp
Ironically, the dmd compiler is written in C++.
What I'd particularly like to see is an analysis of D vs. Rust. Both have been touted as systems programming languages to replace C/C++, and I'd be curious to see a breakdown of their strengths and weaknesses compared to each other.
It would take an expert in both languages to do it. My knowledge of Rust is fairly limited. In particular, every language has its own idiomatic way of doing things (the D Way, the Rust Way), and to compare them, one has to be cognizant of those idioms.
I did some hobby stuff with it, but I'm afraid I haven't used it for real work. A few weeks ago I tried to use it to write a tool to update our Java web services from the latest Maven deploy (I'm afraid I work as a Java programmer :( ). Sadly, I ran into issues linking against std.net.curl, and I ended up writing a Bash script for it instead.
I enjoy writing in D a lot more, but I keep running into the problem that I need to interface with C++ libs for something (for example, Qt 5 for an emulator GUI), or weird errors like this std.net.curl problem on Ubuntu 14.04. If it weren't for that, I would have written the Trillek virtual computer in D.
http://wiki.dlang.org/GUI_Libraries
Just thought I'd share; maybe you overlooked it, and it might be helpful for future projects? Good luck!
> D has a package manager. C++ has none that is popular in its community. Therefore, using a third-party library is many times easier.
This comes up all the time in reference to C++. C++ doesn't need a package manager because it has excellent support from distro package managers. Most libraries are in the system package manager. And yes, sometimes you need a newer version of a package, or one not quite widely-used enough to be in the system package manager, in which case you need to compile it yourself or install it manually. But that is a problem common to all package managers, and there is nothing about D which will make it any easier.
The operating system package manager is supposed to provide packages for libraries that are required to run the rest of the operating system. Not libraries you need for development.
Package managers for development allow things such as installing multiple versions of a library, creating "sandboxes" with specific versions of libraries, and sometimes they are also the de-facto build system for a language.
The OS package manager isn't doing the same job and has very different requirements. Not all libraries (especially ones that are usually built statically, or are header-only) end up in an OS package manager. Typically, OS package managers don't bundle any libraries that are not required by one or more of the applications in the package repository.
So no, an OS package manager doesn't cut it for C or C++ development. It doesn't ship all the libraries, and it can't do all the things you need for serious development.
As a C and C++ developer, this pisses me off constantly. Even though most libraries are easy enough to fetch these days with Git, there's no standard way of configuring, building and installing them. And no OS package manager has ever been able to provide all the libraries I need.
(Note: my frame of reference is experience with UNIX-like systems such as Linux and the BSDs, and even MacPorts on OS X.)
> The operating system package manager is supposed to provide packages for libraries that are required to run the rest of the operating system. Not libraries you need for development.
Distro repos do provide many of the packages you need for development because you need to be able to compile all of the C++ applications and libraries in the distro repo, of which there are many.
> Package managers for development allow things such as installing multiple versions of a library, creating "sandboxes" with specific versions of libraries
Distro repos often include multiple versions of libraries because not all of the crucial applications and libraries in the repo are upgraded at the same speed.
> The OS package manager isn't doing the same job and has very different requirements. Not all libraries (especially ones that are usually built statically, or are header-only) end up in an OS package manager. Typically, OS package managers don't bundle any libraries that are not required by one or more of the applications in the package repository.
Fair enough, and for this reason a language-specific PM will most likely have more libraries available, so I concede that point. However, I seriously doubt that a single group of people could be better at providing packages for all the major combinations of kernel, distro, and CPU architecture than the collection of distro groups. Just imagine how difficult that would be!
C++ is different because it is very intertwined with the system. For D, a language package manager makes sense at this point because D doesn't have the existing distro PM support that C++ does. Once crucial parts of the system start depending on D, the situation might change.
Almost every C++ codebase out there vendors all its dependencies. It looks like they are not using the system package manager. And Windows has no package manager with such library releases at all.
Language package managers have problems, but they do make the work easier.
Anyone here unhappy with pip?
> Anyone here unhappy with pip?
Pip has (had) a number of issues (mostly related to PyPI/availability), but pip + virtualenvs is something I use all the time, and am very happy with.
The ease with which one can simply:
virtualenv some_experiment
# --no-site-packages is now standard! yay!
./some_experiment/bin/pip install -U pip distribute
# sadly needed -- but gets us 'pip list --outdated',
# which is very helpful
./some_experiment/bin/pip install <thing-to-test>
Being able to then just work with that, without even having to mess around with sourcing the "activate" script for most packages, is great. No clutter under /usr/local, no broken system tools due to some dependency being pulled in, etc.
> Almost every C++ codebase out there vendors all its dependencies. It looks like they are not using the system package manager.
Yeah, e.g. Chromium. But that was also the reason why Fedora refused to provide packages for Chromium. Distros don't like projects that include their own versions of libraries because it goes against how things are supposed to work (maintenance and, by implication, a security nightmare). I know that there are many C++ projects in the various distro package repos, and no way do 'most of them' include their own copies of dependencies.
Speaking of package managers: it looks like the SSL cert for dlang.org is expired (and has been for more than a year), which leaves most of us with one less way to verify the installer, as https://dlang.org/gpg_keys.html is only available via untrusted channels.
D looks like a really interesting language; I had never checked it out prior to reading this.
unittest blocks look like they'd be super helpful. And it has modules, which I consider a must-have.
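For reference, this is roughly what they look like: the tests sit right next to the code and run when compiled with -unittest (a trivial made-up example):

int clamp(int value, int lo, int hi)
{
    return value < lo ? lo : value > hi ? hi : value;
}

unittest
{
    // Executed by 'rdmd -unittest' or 'dmd -unittest' before main() runs.
    assert(clamp(5, 0, 10) == 5);
    assert(clamp(-3, 0, 10) == 0);
    assert(clamp(42, 0, 10) == 10);
}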
In what areas would D excel? What are its downsides? What did you like and dislike the most? I'd be really interested in hearing about people's experiences.
As a D programmer currently making a small game engine plus a networked RTS game in the language, invariant blocks are another thing I really enjoy: http://dlang.org/contracts.html#Invariants
They let you specify contracts which are asserted on construction and destruction; it's great.
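A minimal sketch of how an invariant reads (a hypothetical Account class, not from the engine):

class Account
{
    private long balance; // in cents

    this(long openingBalance) { balance = openingBalance; }

    void deposit(long amount) { balance += amount; }

    // Checked automatically after the constructor and around every
    // public member function call (when compiled with contracts enabled).
    invariant
    {
        assert(balance >= 0, "balance must never go negative");
    }
}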
The metaprogramming is also wonderful for things like serialization of data: simply iterate over a type's members at compile time to generate the serialization code, cutting out a lot of bloated code. It's worth noting that it's metaprogramming for humans, compared to C++, but it sometimes leaves a bit to be desired documentation-wise: all the building blocks are there, but sometimes you'd like to not have to reinvent the wheel every time you attempt a metaprogramming-related task.
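The serialization technique mentioned boils down to something like this (a hedged sketch; Player and serialize are made-up names):

import std.stdio : writeln;

struct Player
{
    string name;
    int score;
}

void serialize(T)(T value)
{
    // This foreach is unrolled at compile time: one iteration per field.
    foreach (i, member; value.tupleof)
    {
        writeln(__traits(identifier, T.tupleof[i]), " = ", member);
    }
}

void main()
{
    serialize(Player("alice", 42)); // prints "name = alice" then "score = 42"
}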
Things I don't enjoy include the core language's reliance on the GC, for example exceptions relying on GC allocations.
Thankfully, they've lately made it a bit easier to track down things which may allocate GC memory, with accompanying compiler flags that print a trace of where it may happen.
I'm always on the lookout for improvements to C++, and it's been a while since I looked at D... Hope you don't mind if I ask you a couple of questions.
Can you still turn off the GC? Let's say I don't like exceptions, and I don't mind implementing library code on my own; would turning the GC off impact anything else?
How is the story for deploying executables to mac/win/lin? Is it as easy as C/C++?
Also, do you know if anybody has shipped high-quality games with it before? It's been a while now, so I'd hope so, but I haven't heard of any. Pretty sure no AAA games have used it, but maybe some indie ones?
Making games? :) I'm half joking here, but Kenta Cho has made tons of shoot'em up games, mostly written in D (and released as open source), for years:
https://en.wikipedia.org/wiki/ABA_Games
A bit of an overlooked area where D excels is its support for transitive immutability and function purity. We're discovering more and more things that are now practical because of this.
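A small sketch of why the combination matters: immutable is transitive, so a pure function reading such data can safely be called from many threads without synchronization:

immutable int[] table = [2, 3, 5, 7]; // transitively immutable: the array
                                      // and everything it reaches is frozen

pure long sumTable(immutable(int)[] t)
{
    // Depends only on its argument and mutates nothing: safe to share,
    // cache, or reorder.
    long total = 0;
    foreach (x; t)
        total += x;
    return total;
}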
I was working on a similar article just a couple of weeks ago for the D wiki, but I wanted to focus on the really big things, and I want to include more examples: http://wiki.dlang.org/Coming_From/C_Plus_Plus_WIP_article
Yeah, my article is a bit of a quickie and not very good; I didn't expect people would post it here and on Reddit. Moreover, it isn't a big-picture post.
D looks great, but I feel like depending on a GC is kind of a deal-breaker. If that is accurate, then D fills the same niches as Google's Go rather than C++'s. Rust looks more like a proper replacement for C++.