It seems like the author isn't really defending against the bulk of the criticisms, which revolve around correctness and reliability, and instead highlights 'cool' language features. If you can't trust gradients or probability distributions / sampling to be correct, that's a pretty damning verdict for a language aimed at numerical calculations. The fact that it has some traction in academia (where the link from incorrect computation results to real-world consequences is extremely long-winded and hard to attribute) is worrying from that perspective. Imagine Boeing were to use Julia for its computations (not unlikely, given that it has a great reputation for differential equation solvers).
> If you can't trust gradients or probability distributions / sampling to be correct, that's a pretty damning verdict for a language aimed at numerical calculations.
If you draw the distinction between Julia the language and Julia the ecosystem, then that's not a damning verdict for the language, even if we accept the premise that you can't trust the computations you mentioned.
The way that Julia composes is unprecedented. Typical examples include computing with measurement uncertainty, or big floats through functions that were not explicitly designed to support those types. This is great, but the correctness of this composition cannot be assumed.
One good proxy for how this composition occurs is the number of type combinations in play. Here is a concrete, made-up example. There is a function you've been using without any problem that takes an argument of type `DenseArray{Float64, 2}`. (Meaning, the function gets called with an argument of that type. The function definition is not constrained to that type.)
If you've never focused on the correctness (e.g. reviewing the code, unit tests) of this function to see what it does on `SparseArray{BigFloat, 2}`, I would not take it for granted that it works.
This is perhaps too big a burden on a user. Some might say they'd rather not have the compositionality unless it's guaranteed to be correct. But if you're actually asking me to imagine using Julia in a situation where I really want to be sure about the correctness of a Julia program, I would at the very least attempt to characterize all the expected types going into any function. If at runtime a function is called with a new type, I would log that, and possibly error out.
Note this is related to, but not equivalent to what's necessary to do "static" compilation in Julia.
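As a sketch of that logging idea (everything here — `guarded`, the whitelist, the cache — is hypothetical, not an established Julia API), one could wrap a function so that calls with argument types outside an expected set are logged once, or rejected outright:

```julia
# Hypothetical sketch: wrap a function so that calls with argument types
# outside an expected whitelist are logged the first time (or rejected).
const _seen_types = Set{Tuple}()

function guarded(f; expected = (Matrix{Float64},), strict = false)
    return function (args...)
        T = typeof.(args)
        # each argument type must be a subtype of some whitelisted type
        ok = all(t -> any(E -> t <: E, expected), T)
        if !ok && !(T in _seen_types)
            push!(_seen_types, T)   # log each new combination only once
            if strict
                error("unexpected argument types: $T")
            else
                @warn "function called with new argument types" T
            end
        end
        return f(args...)
    end
end

safe_sum = guarded(sum)
safe_sum(ones(2, 2))         # Matrix{Float64}: expected, silent
safe_sum(big.(ones(2, 2)))   # Matrix{BigFloat}: new combination, logged
```

A strict variant (`strict = true`) gives the "error out" behavior; the default merely records which unanticipated types showed up at runtime.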
> This is perhaps too big a burden on a user. Some might say they'd rather not have the compositionality unless it's guaranteed to be correct.
I think this is it. Or rather, I think to some people, myself included, compositionality should imply correctness. If you /can/ compose things, but the result is wrong, can you really say that you've got composition?
How useful is it really to be able to plug your own types into other people's libraries if you have to trace through the execution of that library, and figure out which libraries it transitively uses, to ensure that all of your instantiations are sound? How do you even test this properly?
It's a really hard problem, and from what I can tell, Julia gives you no tools to deal with this.
[0] is probably relevant here, although I'm not sure I share the positive outlook.
> If you draw the distinction between Julia the language and Julia the ecosystem
Language and ecosystem are so intertwined that I almost never do this. Maybe when there are multiple well developed ecosystems, but when there’s only one, then it and the language are inseparable to me (for any language, not just Julia).
Thanks for the explanation. Yeah, that is an interesting feature. Sounds like you'll have to do a lot of reading documentation and testing other packages yourself, then. A type system that allows defining accepted, rejected, and 'other' data types for functions (and a compiler flag for either warning about the latter or not) would be interesting.
But the original critique also mentioned several correctness issues in the core language itself, so the issue does seem to go deeper.
Bugs are everywhere, including in Python and R libraries. You would not design a safety-critical system in Python using libraries and code paths that are not heavily tested. Even then, you still have to carefully design test suites that give you confidence that what you are computing is what is expected.
I'm sure Boeing uses Julia somewhere for its computations... and they have extensive testing to support that. I recall when the PR folks at SAS, a statistical tool, said that nobody would trust a Boeing plane that had been developed using R (an open source alternative to SAS), and Boeing replied that they used R to design airplanes.
Besides that, the SAS vs R debate is still ongoing. I've heard the same point made in favour of SAS in public health, clinical trial analysis and setup, and various public administrations. They trust the algorithms provided by closed-source, certified vendors over an open source alternative.
At the same time, correct implementation of statistics is hard and testing it to be correct is harder.
Can you point to a concrete example of one that someone would run into when using the differential equation solvers with the default and recommended Enzyme AD for vector-Jacobian products? I'd be happy to look into it, but there do not currently seem to be any open correctness issues in the Enzyme issue tracker (3 issues are open, but they all seem to be fixed, other than https://github.com/EnzymeAD/Enzyme.jl/issues/278, which is actually an activity analysis bug in LLVM). So please be more specific. The issue with Enzyme right now seems to be more about finding functional forms that compile: it throws compile-time errors in the event that it cannot fully analyze the program or if it has too much dynamic behavior (example: https://github.com/EnzymeAD/Enzyme.jl/issues/368).
Additional note: we recently did an overhaul of SciMLSensitivity (https://sensitivity.sciml.ai/dev/, as part of the new https://docs.sciml.ai/dev/) and set up a system which amounts to 15 hours of direct unit tests doing a combinatoric check of arguments, with 4 hours of downstream testing (https://github.com/SciML/SciMLSensitivity.jl/actions/runs/25...). What that identified is that any remaining issues that can arise are due to the implicit parameters mechanism in Zygote (Zygote.params).

To counteract this upstream issue, we (a) default to never using Zygote VJPs whenever we can avoid it (hence defaulting to Enzyme and ReverseDiff first, as previously mentioned), and (b) put in a mechanism for early error throwing if Zygote hits any not-implemented derivative case, with an explicit error message (https://github.com/SciML/SciMLSensitivity.jl/blob/v7.0.1/src...).

We have alerted the devs of the machine learning libraries, and from this there has been a lot of movement. In particular, a globals-free machine learning library, Lux.jl, was created with fully explicit parameters (https://lux.csail.mit.edu/dev/), and thus by design it cannot have this issue. In addition, the Flux.jl library itself is looking to do a redesign that eliminates implicit parameters (https://github.com/FluxML/Flux.jl/issues/1986). Which design wins out in the end is uncertain right now, but it's clear that, no matter what, the future designs of the deep learning libraries will fully cut out that part of Zygote.jl. Additionally, the other AD libraries (Enzyme and Diffractor, for example) do not have this "feature", so it's an issue that can only arise from a specific (not recommended) way of using Zygote (which now throws explicit error messages early and often if used anywhere near SciML, because I don't tolerate it).

tl;dr: don't use Zygote.params; the ML AD devs know this, and its usage is being removed ASAP.
So from this, SciML should be rather safe and if not, please share some details and I'd be happy to dig in.
I really like Julia, and I think this article does a great job of defending its design decisions. But I'm a bit disappointed that it doesn't address Yuri's main criticism, which is that there's some worrying correctness bugs in Julia that aren't addressed as well as they should be.
I'm no language expert, so I can't tell if Yuri's criticisms are just a function of the smaller number of people working on Julia vs a very popular language like Python, or whether the failings he discusses stand out because so many things Just Work in Julia, or whether it indicates some much deeper problem, so I've gone back to look at the original thread - https://news.ycombinator.com/item?id=31396861
I'm interested to hear what others think of Julia's longevity prospects. Its popularity in climate modeling and other scientific contexts suggest it will be around for the long haul, and that Julia projects needn't end up as abandonware for lack of footing. Is that accurate?
Yuri's criticism was not that Julia has correctness bugs as a language, but that certain libraries, when composed with common operations, had bugs (many of which are now addressed).
I will recommend following the discussion on the Julia discourse here that is focussed on productively and constructively addressing the issues (while also discussing them in the right context):
https://discourse.julialang.org/t/discussion-on-why-i-no-lon...
Just like any other open source project, Julia has packages that get well adopted, well maintained, and packages that get abandoned, picked up later, alternatives sprout and so on. The widely used packages usually do not have this problem. Overall the trend is that the user base and contributor base are growing.
I think you must be misremembering. There were several issues linked in his post that were bugs in the core language itself.
I can find you several more, if you want. In the last 2 years I've myself filed something like 6 or 7 correctness bugs in Julia itself (not libraries), and hit at least 2 dozen, whereas I've never found a correctness bug in Python despite using it daily for 5 years.
Right now, you can go to the CodeCov of Julia and find entire functions that are simply not tested. There are many of those, and they are in plain sight. And it would take less than an hour to find a dozen correctness bugs that are filed, known about, agreed to be bugs, and tractable, yet still not put on the milestone for the next Julia release, which means the next Julia release will knowingly ship with these bugs.
I just don't know how people can see these facts and still claim Julia cares a lot about correctness. It's just not true.
If you want something actionable, here are three suggestions:
1) Do not release Julia 1.9 until codecov is at 100% (minus OS-specific branches etc.)
2) Solicit a list of tractable correctness bugs from the community and put all the ones that are agreed to be bugs and that are solvable on the 1.9 milestone.
3) Thoroughly document the interface of every exported abstract type, the print/show/display system, and other prominent interfaces, do not release 1.9 before this is done.
Edit: I apologize for implying you were not being genuine. That was uncalled for.
If I may, Viral: I suspect one takeaway from Yuri's criticism (at least it was a takeaway for me) is that with multiple dispatch, correctness bugs like the ones listed are hard to find (impossible, even). How would you respond to that criticism?
In my opinion, better tooling to assist with such cases would help tremendously. Adding support for interfaces to `Base` would be a great start. What are your thoughts about this?
Also, it's been a while since we've seen a roadmap on what the core team is working on. What are the next big features we can expect from the language, and what is the approximate timeline for them? Having answers to these questions would be extremely helpful.
> Yuri's criticism was not that Julia has correctness bugs as a language
Are you sure? Here are some issues from the post:
"Wrong if-else control flow" seems like a language issue? The bug is still open [0]
"Wrong results since some copyto! methods don’t check for aliasing" seems like a bug in a core library. The bug, which is filed against Julia, not some third-party library, is still open [1]
"The product function can produce incorrect results for 8-bit, 16-bit, and 32-bit integers" was a bug in a core library, which was fixed [2]
"Base functions sum!, prod!, any!, and all! may silently return incorrect results" seems like a bug in a core library and is still open [3]
"Off-by-one error in dayofquarter() in leap years" seems like a bug in a core library which was fixed [4]
"Pipeline with stdout=IOStream writes out of order" seems like a bug in a core library and is still open [5]
I've been deliberately conservative here and only posted the issues from Yuri's post that are in the JuliaLang/julia repository. The other issues are filed against JuliaStats/Distributions.jl, JuliaStats/StatsBase.jl, JuliaCollections/OrderedCollections.jl, and JuliaPhysics/Measurements.jl. Since I have not used Julia very much, I don't know whether these are commonly used libraries or obscure libraries nobody uses, but they seem pretty close to the core use-cases of the language. Maybe someone who uses the language a lot more can shed some light on this issue.
Some commenters seem exhausted by what they perceive as a continual stream of lies about these topics, which has left them less inclined to post about them.
I think the language has amassed a nearly unassailable lead in differential equations, scientific machine learning, and (to a lesser degree) optimization. That lead hinges on advantages of the language, so I think others are unlikely to catch up. The story for working with GPUs is also really good compared to e.g. Python. I think those things alone are enough to keep the language around for a while.
> One such operation can be to sell. The sell function can do things like marking the book object as sold
Maybe this isn’t a good example of OOP?
Does it suggest the author perhaps doesn't fully understand principles behind loosely coupled OO code?
If is_sold is an attribute of book, the class shouldn't have a sell method, but a mark_as_sold.
The action of selling is not the responsibility of a Book. Baking it into Book will create tight coupling between areas of the code (product and commerce) that should be loosely coupled. The "commerce" side may change for reasons unrelated to the "product" side. Keep'em separated.
And if we think for a minute, is_sold should not be a Book attribute in the first place. It seems to me this piece of information should be handled somewhere else related to inventory, not by the Book itself.
There should be a ProductEntry class, for instance. It could have a product_type and product_sku attributes, for example, pointing to a Book.
Even here, is_sold shouldn't be an attribute, but a method. It would have an availability_status attribute.
Ultimately, the inventory code should not care that it's a Book or whatever.
The logic of whether it's available or not doesn't belong to the product itself. What if someone bought, but returned it? Maybe it will be available for shipping tomorrow and the store wants to start reselling it already? Is it the Book responsibility to track that? No way.
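A rough sketch of the separation being argued for, written in Julia for consistency with the rest of the thread (every name here — `ProductEntry`, `availability_status`, `mark_as_sold!` — is hypothetical, not from the article under discussion):

```julia
# The product knows nothing about sales; availability lives in a
# separate inventory record that merely points at the product.
struct Book
    title::String
    author::String
end

mutable struct ProductEntry
    product::Any                 # a Book, or any other product type
    product_sku::String
    availability_status::Symbol  # :available, :sold, :returned, ...
end

is_sold(entry::ProductEntry) = entry.availability_status == :sold

function mark_as_sold!(entry::ProductEntry)
    entry.availability_status = :sold
    return entry
end

entry = ProductEntry(Book("Dune", "Frank Herbert"), "SKU-0001", :available)
mark_as_sold!(entry)
is_sold(entry)   # true; the Book itself was never touched
```

The inventory code only cares about `ProductEntry`, so the "commerce" side can change without touching `Book` at all.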
Seems to me that you are describing something that should be a relational database, and where object orientation doesn't seem to bring anything of value.
The OP just pointed out how the provided example would actually work in an OOP project; it's not their fault that they had to work with an example that is indeed an inventory management problem.
To phrase succinctly the (direct) counter to the previous blog post: Julia makes possible the kind of combinatorial composition (and therefore, radical modularity & reuse) that is simply not possible in most languages. This will lead to some friction as the community figures out the right design patterns in these uncharted waters, but on the flip side one already gets many superpowers, provided one is careful about testing the composition works as expected, rather than just blindly assuming it does.
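A toy illustration of that composition: a generic function written with no particular numeric type in mind runs unchanged on types its author may never have considered — which is also exactly why each new combination still needs its own tests.

```julia
# Horner evaluation of a polynomial; nothing here names a concrete type.
horner(coeffs, x) = foldr((c, acc) -> muladd(acc, x, c), coeffs; init = zero(x))

# p(x) = 1 + 2x + 3x^2, evaluated at x = 2 with four different number types:
horner([1, 2, 3], 2.0)        # Float64        -> 17.0
horner([1, 2, 3], big"2.0")   # BigFloat, at 256-bit precision
horner([1, 2, 3], 2 // 1)     # Rational{Int}  -> 17//1, exact arithmetic
horner([1, 2, 3], 2.0 + 0im)  # Complex{Float64}
```

None of these instantiations was "designed in"; they fall out of generic dispatch, for better (reuse) and for worse (untested combinations).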
> Finally, it is also likely that CPUs will become faster over time and that tooling around reducing TTFX will improve.
I don't agree with this as a mitigation. This line of reasoning is why, despite steady hardware improvements over the past few decades, the responsiveness of (PC) programs and websites has stagnated or regressed.
To the main TTFX issue - I won't consider Julia until this is taken seriously.
The load times on some core packages were reduced by an order of magnitude this month. For example, RecursiveArrayTools went from 6228.5 ms to 292.7 ms. This was due to the new `@time_imports` in the Julia v1.8-beta helping to isolate load time issues. See https://github.com/SciML/RecursiveArrayTools.jl/pull/217 . And that's just one of many packages which have seen this change just this month. This of course doesn't mean load times have been solved everywhere, but we now have the tooling to identify the root causes and it's actively being worked on from multiple directions.
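For reference, the tooling in question is `InteractiveUtils.@time_imports` (new in Julia v1.8): it prints a load-time line for every transitively loaded package, which is how load-time regressions like the RecursiveArrayTools one were isolated. Shown here on a stdlib so the snippet is self-contained:

```julia
using InteractiveUtils  # provides @time_imports on Julia ≥ 1.8

# Prints one line per newly loaded package with its import time, e.g.
#     1.2 ms  Printf
# (numbers vary by machine; run on a heavy package, this is the kind of
# output that pinpoints which dependency dominates the load time)
@time_imports using Printf
```

Running it on a real dependency chain shows which package in the chain is worth optimizing first.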
I totally agree; leaning on Moore's Law to solve performance issues doesn't work. When you have performance issues in the language itself, you're essentially paying a performance tax on every piece of code that executes, in a compounding way. So if Julia became a widespread programming language for general applications, you'd have those small delays leaking into all sorts of operations, and before you know it your whole system feels slow and bogged down.
I have been testing some performance oriented software recently, and it's amazing how much more productive I feel when every keypress is immediately reflected on screen with no delay. We can adapt to poor performance, but it adds a constant cognitive load to deal with it.
In my mind Julia is suited for limited applications, like doing work in a jupyter notebook, but is not suited for general applications unless the TTFX issue can be fixed.
In my opinion, there is no such thing as a "general application". What about generating static websites? Is that a "general application"? Is TTFX a big issue there? Currently, the Julia package Franklin.jl takes around 15 seconds to create the website the first time (changes in the source code update the page in less than one second). I'd argue no.
What about Julia to produce generative art? Is TTFX an issue there? Etc. If "general applications" means _all possible applications_, then I'm not sure any language is suitable for general applications.
Julia has weaknesses which limit what it's good at. Right now I think it has more weaknesses than other languages. Some of these limits will hopefully be removed or reduced in the future. But fundamentally, I don't see Julia as some kind of domain-specific or niche language. Just like any language, it has its own set of tradeoffs.
It seems very similar to what's going on with Java these days.
If your plan is to do bytecode verification and class-loading only once requests are coming in, and to handle the first however many requests with interpretive execution (prior to JIT), you shouldn't be surprised to see poor performance for those first requests.
Now, ahead-of-time compilation in the JVM is becoming more mainstream, and brings the expected benefits to, for instance, start-up time. [0]
The quote provided explicitly suggests that tooling around reducing TTFX should also improve. It’s not suggesting improved CPU power as the sole mitigation.
> Based on the amount and the ferocity of the comments, it is natural to conclude that Julia as a whole must produce incorrect results and therefore cannot be a productive environment. However, the scope of the blog post and the discussions are narrow. In general, I still recommend Julia because it is fundamentally productive and, with care, correct.
If TFA really means "Most of the time if statements take the correct branch, and it's unusual for them to take the incorrect branch, and you should use tests to detect whether the if statements in your program are working" then I would appreciate it if TFA would come out and say that. I am sort of used to programming in environments in which if statements work 100% of the time.
Aside from me disagreeing with this article here in that I don't think "all of julia is incorrect" is a valid take from Yuri's original article, I'm not sure how you arrive at "if branches don't do what I think they do" from that sentence. Are you referring to one (already fixed) bug[0]?
I am also thinking about an issue where catch randomly does not catch exceptions. I have not found the right issue in the issue tracker, so I don't know whether this is fixed.
I don't see how it addresses the original complaint. Vishnevsky basically stated that if you are trying to run a scientific experiment on a supercomputer, maybe it's risky to use a new programming language with a new stdlib and a bunch of OSS libraries versus an old language like C with a very stable set of existing code, because new things tend to have unknown bugs. Vishnevsky has a point, but unless you are running critical computations on supercomputers, maybe it doesn't apply to you?
To be clear, in supercomputing environments people still use old versions of CentOS just to make sure that library version updates do not change their computation results. I don't think many people here would say "I am sticking to Ubuntu 16.04 because I am afraid that the updates to some library like gmplib will slightly change my computation results in a way that is hard for me to detect".
Also, just staying with the old doesn't mean it's correct. You can also introduce bugs into your libs. I think NASA thought this through a long time ago and solved it by making sure critical parts of the code are implemented twice, using different stacks and different programmers.
If you are NASA, CERN, LLNL, or a bank, maybe it's a good idea to implement your computations once in Python and once in Julia (by at least two different programmers) and compare the outputs. And I don't think in this situation Julia is any different from other languages (other than you may put too much trust into it and skip this dual implementation). Case in point: https://github.com/scipy/scipy/issues?q=is%3Aissue+is%3Aclos...
> If you are NASA, CERN, LLNL, or a bank, maybe it's a good idea to implement your computations once in Python and once in Julia
Doesn’t this negate one of Julia’s main selling points? That it has “solved the two-language problem”. Ironic for them to solve that in the performance domain only to then need a second language to prove correctness.
Note that with the Julia differential equation solvers, you can, without rewriting your code, test it with SciPy's solvers, MATLAB's solvers, C solvers (CVODE), Fortran solvers (Hairer's, lsoda, etc.), and the pure Julia methods (of which there are many, and many different libraries). https://diffeq.sciml.ai/stable/solvers/ode_solve/ Even a few methods for quantum computers. This also includes methods with uncertainty quantification (Taylor methods with interval setups, probabilistic numerics). So no, you can run these kinds of checks without rewriting your model. (Of course, some of the solvers to check against will be really slow, but this is about correctness and the ability to easily check correctness)
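As a minimal sketch of that cross-checking workflow (this uses the real OrdinaryDiffEq.jl API from the DifferentialEquations.jl ecosystem; the wrapped C solver line is commented out since it needs the extra Sundials.jl package):

```julia
using OrdinaryDiffEq

# One model definition: du/dt = 1.01u, u(0) = 0.5, for t in [0, 1]
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))

# ...solved by several independent implementations, without rewriting it:
sol_tsit = solve(prob, Tsit5())   # pure-Julia Runge-Kutta
sol_vern = solve(prob, Vern9())   # higher-order pure-Julia method
# sol_cvode = solve(prob, CVODE_BDF())  # C solver (CVODE), via Sundials.jl

# Cross-check the solvers against each other and the analytic answer:
abs(sol_tsit(1.0) - sol_vern(1.0))    # small
abs(sol_tsit(1.0) - 0.5 * exp(1.01))  # small: u(t) = 0.5 * exp(1.01t)
```

Swapping the solver argument is the whole migration cost, which is what makes this kind of correctness cross-check cheap.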
I interpreted it more that, in domains where correctness is vital (rather than "good enough"), you want more than one implementation no matter what languages you use.
Maybe that's not what parent was going for, but I think it's like the reproducible/replicable difference in research... can you use the author's code and data, getting the same result... can you use the author's algorithm/pseudocode and data, and get the same result... can you use the author's algorithm/code and different data, and get an _equivalent_ result?
The more code is used, the more bugs are found. So new code is expected to contain more unknown bugs than old, well-used code. But this is a function of usage, not of time itself.
I just came across Yuri's criticism for the first time, but it makes sense to me. I am not a user of Julia, but have followed it with interest since they published their first paper about it. With hindsight, it is clear that they would run into correctness issues due to their powerful composability features. The solution is obvious and hard at the same time: there must be a way to PROVE correctness. Of course, to incorporate a prover into Julia will be pretty hard, it is probably much easier to incorporate (some of) the ideas of Julia into a new shiny prover.
[0]: http://www.jerf.org/iri/post/2954
See these talks on how a collision avoidance system was designed with Julia: https://www.youtube.com/watch?v=rj-WhTL_VXE and https://www.youtube.com/watch?v=19zm1Fn0S9M
At the same time, correct implementation of statistics is hard and testing it to be correct is harder.
Additional note, we recently did a overhaul of SciMLSensitivity (https://sensitivity.sciml.ai/dev/, as part of the new https://docs.sciml.ai/dev/) and setup a system which amounts to 15 hours of direct unit tests doing a combinatoric check of arguments with 4 hours of downstream testing (https://github.com/SciML/SciMLSensitivity.jl/actions/runs/25...). What that identified is that any remaining issues that can arise are due to the implicit parameters mechanism in Zygote (Zygote.params). To counteract this upstream issue, we (a) try to default to never default to Zygote VJPs whenever we can avoid it (hence defaulting to Enzyme and ReverseDiff first as previously mentioned), and (b) put in a mechanism for early error throwing if Zygote hits any not implemented derivative case with an explicit error message (https://github.com/SciML/SciMLSensitivity.jl/blob/v7.0.1/src...). We have alerted the devs of the machine learning libraries, and from this there has been a lot of movement. In particular, a globals-free machine learning library, Lux.jl, was created with fully explicit parameters https://lux.csail.mit.edu/dev/, and thus by design it cannot have this issue. In addition, the Flux.jl library itself is looking to do a redesign that eliminates implicit parameters (https://github.com/FluxML/Flux.jl/issues/1986). Which design will be the one in the end, that's uncertain right now, but it's clear that no matter what the future designs of the deep learning libraries will fully cut out that part of Zygote.jl. And additionally, the other AD libraries (Enzyme and Diffractor for example) do not have this "feature", so it's an issue that can only arise from a specific (not recommended) way of using Zygote (which now throws explicit error messages early and often if used anywhere near SciML because I don't tolerate it). tl;dr: don't use Zygote.params, the ML AD devs know this its usage is being removed ASAP.
So from this, SciML should be rather safe and if not, please share some details and I'd be happy to dig in.
I'm no language expert, so I can't tell if Yuri's criticisms are just a function of the smaller number of people working on Julia vs a very popular language like Python, or whether the failings he discusses stand out because so many things Just Work in Julia, or whether it indicates some much deeper problem, so I've gone back to look at the original thread - https://news.ycombinator.com/item?id=31396861
I'm interested to hear what others think of Julia's longevity prospects. Its popularity in climate modeling and other scientific contexts suggest it will be around for the long haul, and that Julia projects needn't end up as abandonware for lack of footing. Is that accurate?
I will recommend following the discussion on the Julia discourse here that is focussed on productively and constructively addressing the issues (while also discussing them in the right context): https://discourse.julialang.org/t/discussion-on-why-i-no-lon...
Just like any other open source project, Julia has packages that get well adopted, well maintained, and packages that get abandoned, picked up later, alternatives sprout and so on. The widely used packages usually do not have this problem. Overall the trend is that the user base and contributor base are growing.
I can find you several more, if you want. In the last 2 years I've myself filed something like 6 or 7 correctness bugs in Julia itself (not libraries), and hit at least 2 dozen, whereas I've never found a correctness bug in Python despite using it daily for 5 years.
Right now, you can go to the CodeCov of Julia and find entire functions that are simply not tested. Many of those, and they are in plain sight. And it would take less than an hour to find a dozen correctness bugs that are filed, known about, agreed to be a bug, tractable, yet still not put on the milestone for the next Julia release, which means the next Julia release will knowingly include these bugs.
I just don't know how people can see these facts and still claim Julia cares a lot about correctness. It's just not true.
If you want something actionable, here are three suggestions:
1) Do not release Julia 1.9 until codecov is at 100% (minus OS-specific branches etc.)
2) Solicit a list of tractable correctness bugs from the community and put all the ones that are agreed to be bugs and that are solvable on the 1.9 milestone.
3) Thoroughly document the interface of every exported abstract type, the print/show/display system, and other prominent interfaces; do not release 1.9 before this is done.
Edit: I apologize for implying you were not being genuine. That was uncalled for.
In my opinion, better tooling to assist with such cases would help tremendously. Adding support for interfaces to `Base` would be a great start. What are your thoughts about this?
Also, it's been a while since we've seen a roadmap for what the core team is working on. What are the next big features we can expect from the language, and what is the approximate timeline for them? Having answers to these questions would be extremely helpful.
Are you sure? Here are some issues from the post:
"Wrong if-else control flow" seems like a language issue; the bug is still open [0]
"Wrong results since some copyto! methods don’t check for aliasing" seems like a bug in a core library. The bug, which is filed against Julia, not some third-party library, is still open [1]
"The product function can produce incorrect results for 8-bit, 16-bit, and 32-bit integers" was a bug in a core library, which was fixed [2]
"Base functions sum!, prod!, any!, and all! may silently return incorrect results" seems like a bug in a core library and is still open [3]
"Off-by-one error in dayofquarter() in leap years" seems like a bug in a core library which was fixed [4]
"Pipeline with stdout=IOStream writes out of order" seems like a bug in a core library and is still open [5]
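The aliasing items above ([1] and [3]) are instances of a classic bug pattern that is not unique to Julia: a copy loop that doesn't account for the source and destination overlapping. A minimal Python sketch of the hazard (the function `naive_copyto` is made up for illustration; it is not the Julia implementation):

```python
def naive_copyto(dst, dst_off, src, src_off, n):
    """Copy n elements one at a time, without checking for aliasing."""
    for i in range(n):
        dst[dst_off + i] = src[src_off + i]

buf = [0, 1, 2, 3, 4, 5]
# Overlapping copy: try to shift elements 0..4 one slot to the right.
# An alias-aware copy (memmove-style) would yield [0, 0, 1, 2, 3, 4],
# but the forward loop keeps re-reading values it has already overwritten.
naive_copyto(buf, 1, buf, 0, 5)
print(buf)  # → [0, 0, 0, 0, 0, 0]
```

An alias-aware implementation copies backwards when the ranges overlap in this direction, or goes through a temporary buffer; the missing aliasing check is what the open issue reports.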
I've been deliberately conservative here and only posted the issues from Yuri's post that are in the JuliaLang/julia repository. The other issues are filed against JuliaStats/Distributions.jl, JuliaStats/StatsBase.jl, JuliaCollections/OrderedCollections.jl, and JuliaPhysics/Measurements.jl. Since I have not used Julia very much, I don't know whether these are commonly used libraries or obscure libraries nobody uses, but they seem pretty close to the core use-cases of the language. Maybe someone who uses the language a lot more can shed some light on this issue.
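For context on the product bug class: fixed-width integers wrap silently on overflow, so any reduction that accumulates in the narrow element type can return a wrong answer with no error. A pure-Python sketch that simulates accumulating a product in a signed 8-bit register (the helper `prod_int8` is made up for illustration):

```python
def prod_int8(xs):
    """Product accumulated in a simulated signed 8-bit register (wraps on overflow)."""
    acc = 1
    for x in xs:
        # Wrap the running product into the Int8 range [-128, 127].
        acc = (acc * x + 128) % 256 - 128
    return acc

print(100 * 100)            # correct mathematical answer: 10000
print(prod_int8([100, 100]))  # → 16  (10000 wrapped modulo 256), silently wrong
```

The fix in the Julia issue was of the usual form: widen the accumulator so small-integer inputs don't overflow the running product.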
Some commenters seem exhausted by what they perceive as a continual stream of lies about these topics, which has left them less inclined to post about them.
[0]: https://github.com/JuliaLang/julia/issues/41096
[1]: https://github.com/JuliaLang/julia/issues/39460
[2]: https://github.com/JuliaLang/julia/issues/39183
[3]: https://github.com/JuliaLang/julia/issues/39385
[4]: https://github.com/JuliaLang/julia/pull/36543
[5]: https://github.com/JuliaLang/julia/issues/36069
Maybe this isn’t a good example of OOP?
Does it suggest the author perhaps doesn't fully understand principles behind loosely coupled OO code?
If is_sold is an attribute of Book, the class shouldn't have a sell method, but a mark_as_sold method.
The action of selling is not the responsibility of a Book. Baking it into Book will create tight coupling between areas of the code (product and commerce) that should be loosely coupled. The "commerce" side may change for reasons unrelated to the "product" side. Keep'em separated.
And if we think for a minute, is_sold should not be a Book attribute in the first place. It seems to me this piece of information should be handled somewhere else related to inventory, not by the Book itself.
There should be a ProductEntry class, for instance. It could have product_type and product_sku attributes, for example, pointing to a Book.
Even here, is_sold shouldn't be an attribute, but a method. It would have an availability_status attribute.
def is_sold(self): return self.availability_status == ProductStatus.SOLD
Ultimately, the inventory code should not care that it's a Book or whatever.
The logic of whether it's available or not doesn't belong to the product itself. What if someone bought it, but returned it? Maybe it will be available for shipping tomorrow and the store wants to start reselling it already. Is it the Book's responsibility to track that? No way.
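A minimal Python sketch of the separation being argued for (the class and attribute names ProductEntry, product_sku, and availability_status follow the comment; everything else is illustrative):

```python
from dataclasses import dataclass
from enum import Enum, auto

class ProductStatus(Enum):
    AVAILABLE = auto()
    SOLD = auto()

@dataclass
class Book:
    # The "product" side: no sales state baked in.
    title: str
    author: str

@dataclass
class ProductEntry:
    # The "commerce"/inventory side: tracks availability, not the Book itself.
    product_sku: str
    product: object  # a Book, or any other product type
    availability_status: ProductStatus = ProductStatus.AVAILABLE

    def is_sold(self):
        return self.availability_status == ProductStatus.SOLD

    def mark_as_sold(self):
        self.availability_status = ProductStatus.SOLD

entry = ProductEntry("BOOK-001", Book("Dune", "Frank Herbert"))
entry.mark_as_sold()
print(entry.is_sold())  # → True
```

The commerce code can change ProductStatus (returns, pre-orders, reshipping) without touching Book at all, which is the loose coupling the comment is after.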
Your critique seems to prove the author's claim.
I don't agree with this as a mitigation. This line of reasoning is why, despite steady hardware improvements over the past few decades, the responsiveness of (PC) programs and websites has stagnated or regressed.
To the main TTFX issue - I won't consider Julia until this is taken seriously.
I have been testing some performance-oriented software recently, and it's amazing how much more productive I feel when every keypress is immediately reflected on screen with no delay. We can adapt to poor performance, but dealing with it adds a constant cognitive load.
In my mind Julia is suited for limited applications, like doing work in a jupyter notebook, but is not suited for general applications unless the TTFX issue can be fixed.
What about Julia to produce generative art? Is TTFX an issue there? Etc. If "general applications" means _all possible applications_, then I'm not sure any language is suitable for general applications.
Julia has weaknesses which limit what it's good at. Right now I think it has more weaknesses than other languages. Some of these limits will hopefully be removed or reduced in the future. But fundamentally, I don't see Julia as some kind of domain-specific or niche language. Just like any language, it just has its own set of tradeoffs.
If your plan is to do bytecode verification and class loading only once requests are coming in, and to handle the first however many requests with interpreted execution (prior to JIT compilation), you shouldn't be surprised to see poor performance for those first requests.
Now, ahead-of-time compilation in the JVM is becoming more mainstream, and brings the expected benefits to, for instance, start-up time. [0]
[0] https://spring.io/blog/2021/12/09/new-aot-engine-brings-spri...
If TFA really means "Most of the time if statements take the correct branch, and it's unusual for them to take the incorrect branch, and you should use tests to detect whether the if statements in your program are working" then I would appreciate it if TFA would come out and say that. I am sort of used to programming in environments in which if statements work 100% of the time.
[0]: https://github.com/JuliaLang/julia/issues/41096
To be clear, in supercomputing environments people still use old versions of CentOS just to make sure that library version updates do not change their computation results. I don't think many people here would say "I am sticking to Ubuntu 16.04 because I am afraid that the updates to some library like gmplib will slightly change my computation results in a way that is hard for me to detect".
Also, just staying with the old doesn't mean it's correct; your libraries can contain bugs of their own. I think NASA thought this through a long time ago and solved it by making sure critical parts of the code are implemented twice, on different stacks, by different programmers.
If you are NASA, CERN, LLNL, or a bank, maybe it's a good idea to implement your computations once in Python and once in Julia (by at least two different programmers) and compare the outputs. And I don't think in this situation Julia is any different from other languages (other than you may put too much trust into it and skip this dual implementation). Case in point: https://github.com/scipy/scipy/issues?q=is%3Aissue+is%3Aclos...
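A toy sketch of that dual-implementation idea in Python: compute the same quantity with two independently written algorithms and flag any disagreement. The mean here stands in for a real computation, and both function names are made up:

```python
import math

def mean_naive(xs):
    # Implementation A: straightforward sum-then-divide.
    return sum(xs) / len(xs)

def mean_running(xs):
    # Implementation B: incremental (Welford-style) running mean.
    m = 0.0
    for i, x in enumerate(xs, start=1):
        m += (x - m) / i
    return m

xs = [1.0, 2.0, 3.0, 4.0]
a, b = mean_naive(xs), mean_running(xs)
if not math.isclose(a, b, rel_tol=1e-9):
    raise RuntimeError(f"implementations disagree: {a} vs {b}")
print(a)  # → 2.5
```

In a real setting the two implementations would live in different languages and be written by different people, so a shared bug is much less likely to slip through the comparison.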
Doesn’t this negate one of Julia’s main selling points? That it has “solved the two-language problem”. Ironic for them to solve that in the performance domain only to then need a second language to prove correctness.
Maybe that's not what parent was going for, but I think it's like the reproducible/replicable difference in research... can you use the author's code and data, getting the same result... can you use the author's algorithm/pseudocode and data, and get the same result... can you use the author's algorithm/code and different data, and get an _equivalent_ result?
Guess what also has unknown bugs? Old C code.
This is in no way specific to Julia.
I think these features are great, but on their own they lead to exactly the situation as described.