I think it's pretty clear that Wittgenstein's truth tables are those that guided the development of computer science.
>In a manuscript of 1893, in the context of his study of the truth-functional analysis of propositions and proofs and his continuing efforts at defining and understanding the nature of logical inference, and against the background of his mathematical work in matrix theory in algebra, Charles Peirce presented a truth table which displayed in matrix form the definition of his most fundamental connective, that of illation, which is equivalent to the truth-functional definition of material implication. Peirce’s matrix is exactly equivalent to that for material implication discovered by Shosky that is attributable to Bertrand Russell and has been dated as originating in 1912. Thus, Peirce’s table of 1893 may be considered to be the earliest known instance of a truth table device in the familiar form which is attributable to an identifiable author, and antedates not only the tables of Post, Wittgenstein, and Łukasiewicz of 1920-22, but Russell’s table of 1912 and also Peirce’s previously identified tables for trivalent logic traceable to 1902.
PDF of Anellis's paper: https://arxiv.org/ftp/arxiv/papers/1108/1108.2429.pdf
>But even if that conclusion is challenged, it is now clear that Russell understood and used the truth-table technique and the truth-table device. By 1910, Russell had already demonstrated a well-documented understanding of the truth-table technique in his work on Principia Mathematica. Now, it would seem that by 1912, and surely by 1914, Russell understood, and used, the truth-table device. Of course, the combination of logical conception and logical engineering by Russell in his use of truth tables is the culmination of work by Boole and Frege, who were closely studied by Russell. Wittgenstein and Post still deserve recognition for realizing the value and power of the truth-table device. But Russell also deserves some recognition on this topic, as part of this pantheon of logicians.
>In this paper I have shown that neither the truth-table technique nor the truth-table device was "invented" by Wittgenstein or Post in 1921-22. The truth-table technique may originally be a product of Philo's mind, but it was clearly in use by Boole, Frege, and Whitehead and Russell. The truth-table device is found in use by Wittgenstein in 1912, perhaps with some collaboration from Russell. Russell used the truth-table technique at Harvard in 1914 and in London in 1918. So the truth-table technique and the truth-table device both predate the early 1920s.
PDF of Shosky's relevant paper: https://mulpress.mcmaster.ca/russelljournal/article/download...
That you would deride Wittgenstein on a math/CS forum, when he is literally the person who thought up the concept of truth tables, seems quite egregious.
Yes, Wittgenstein is one of the most frustrating philosophers to read (I know, I took a class on his work), but his impact on the development of computer science, as one of the main people trying to harness the logic of thought/language, seems obvious to me.
That would be Charles Peirce, in the XIXth century, not Wittgenstein.
From a historical point of view, it is the other way around. The Germans, French, and English always had a hard-on for the Slavs. They desperately wanted those resource-rich, strategically located lands. They still do.
> The Uprising started when the Red Army appeared on the city's doorstep, and the Poles in Warsaw were counting on Soviet front capturing or forwarding beyond the city in a matter of days. This basic scenario of an uprising against the Germans, launched a few days before the arrival of Allied forces, played out successfully in a number of European capitals, such as Paris and Prague. However, despite easy capture of area south-east of Warsaw barely 10 kilometres (6.2 miles) from the city centre and holding these positions for about 40 days, the Soviets did not extend any effective aid to the resistance within Warsaw.
https://en.wikipedia.org/wiki/Warsaw_Uprising#Soviet_stance
I think you confused saving with conquering.
I have sat through many a French meal.
A decade ago, that was a lot of RAM for a desktop environment. In the late '00s, I remember Ubuntu with Gnome 2 using around 128 MB of RAM right after boot.
What happened? Most DEs aren't that much more complicated than they were a decade+ ago. Is it the array of supporting libraries (Qt and Gtk) that get loaded into memory? I could see that being a problem since even the "lightweight" DEs like XFCE and LXQt rely on them heavily.
I have 20+ years of experience writing C++.
Yes, I've looked at "core" C++ headers and source. The most annoying part to me is style (mixed tabs and spaces, curly braces in the wrong places, parentheses or not, the usual style complaints). But other than that they're very readable to a seasoned C++ engineer.
I've also tried to understand symbols. You're right, they're difficult. But there's also tooling available to do it automatically. Even if you don't want to use the tools, there is a method to the madness and it's documented...
Let me ask ChatGPT:
> What tool lets me translate an exported symbol name to a C++ name?
`c++filt`
It's categorized as a demangler. That's your search term to look for (I had to remember what it was). Then I asked:
> Is there a function in the standard library which allows to mangle or demangle a given name or symbol?
It tells me about `__cxa_demangle` for GCC. While I had forgotten about that, I was pretty sure there was something like it (or similar) available alongside the standard library.
It also suggests using `abi::__cxa_demangle`. Hah, that's what I was looking for. It's an implementation-specific (i.e., compiler-specific) API, used as an example. It's mentioned on the `std::type_info::name()` page here:
https://en.cppreference.com/w/cpp/types/type_info/name
So, to continue replying to you: yes, it's annoying but it's solvable with tools that you can absolutely integrate into your IDE or command-line workflow.
> - Standardization that feels like pulling more Boost into the language, which means more templates.
The Boost libraries are open source and their mailing lists are active. If you don't like a given library because it has too many templates, you could make one with fewer templates.
And the standardization process itself is quite open. The C++ committee is very receptive to improvements. The committee members are volunteers (so their time is limited) and usually have their own improvements to the standard that they want, so you have to drive the changes you want (e.g., actively seek feedback and engagement).
> P.S. Example from recent core file, one line in the stack trace:
I've seen much longer -- I've seen templates entirely fill a terminal buffer for a single line. That's extremely rare, definitely not fun, and debuggability is absolutely a valid reason to refactor the application design (or contribute library changes).
I find it useful to copy the template vomit into a temporary file and then run a formatter (e.g. clang-format), or a search-and-replace that inserts a newline after each `<`, `(`, and `{` (something like `s/[<({]/&\n/g`) and manually indent. Then the compiled type is easier to read.
Some debuggers also understand type aliases. They'll replace the aliased type with the name you actually used, and then separately emit a message (e.g., on another line) showing the type alias definition (so you can see it even if you don't have a copy of the source).
> I have 20+ years of experience writing C++.
> Yes, I've looked at "core" C++ headers and source.
Any specific issues? I didn’t see any. No offense. One factor may be that Arch prioritizes not patching upstream, which may have helped save them from being targeted here, and it doesn’t go overboard with default configs, which I’ve long appreciated.
Not to distro-war, and I’m very grateful for Debian. My background is finding Linux in the mid-00s and breaking many SuSE, Ubuntu, and one or two Debian systems before finding something I could understand, repair, and maintain in Arch in 2008.
systemd has strengthened Linux’s ability to stay relevant in the enterprise, made it even easier to package for, and has been a useful tool for me in diagnosing service issues and managing badly behaved software.
I’m not sure how often it’s posted here, but Benno Rice, formerly of the FreeBSD Core Team, has an excellent and amusing discussion of systemd’s technical merits.
IMO he makes a couple good points (and a couple poor ones), but it’s about everything except technical merits. It’s more about social and philosophical aspects.
“Tree-shaking” as commonly used implies that the granularity of the removal is functions (or modules), whereas dead-code elimination can generally operate at arbitrarily fine granularity, for example eliminating branches of conditional expressions, and can be based on all kinds of static program analyses.