rbehrends commented on Kotlin-Lsp: Kotlin Language Server and Plugin for Visual Studio Code   github.com/Kotlin/kotlin-... · Posted by u/todsacerdoti
twen_ty · 3 months ago
Apart from legacy projects written in Kotlin, after Java 21/23, what's the argument for using Kotlin anymore, especially given that it's a proprietary language?
rbehrends · 3 months ago
Aside from the often cited nullability issue, here is an (incomplete) list of important things that Kotlin still does better than Java:

- First-class, fully functional closures.
- Non-abstract classes and methods are final by default.
- Named parameters.
- Easy-to-write iterators via sequence { ... }
- First-class support for unsigned types.
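A quick sketch showing several of these in one place (illustrative code, not from the original comment; runs script-style, e.g. as a `.kts` file):

```kotlin
// Named parameters and default arguments.
fun greet(name: String, punctuation: String = "!") = "Hello, $name$punctuation"

// Easy-to-write iterators via sequence { ... }.
fun powersOfTwo() = sequence {
    var x = 1
    while (true) { yield(x); x *= 2 }
}

println(greet(name = "world", punctuation = "?"))  // Hello, world?
println(powersOfTwo().take(5).toList())            // [1, 2, 4, 8, 16]

// First-class closures can mutate captured locals,
// and 1u/2u/3u are first-class unsigned (UInt) literals.
var total = 0
listOf(1u, 2u, 3u).forEach { total += it.toInt() }
println(total)                                     // 6
```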

rbehrends commented on Conservative GC can be faster than precise GC   wingolog.org/archives/202... · Posted by u/diegocg
nu11ptr · a year ago
I've never understood why anyone would use a conservative collector outside toy programs or academia. It is hard enough to make programs deterministic even with precise collection. I can't even imagine releasing software that was inherently non-deterministic and could suddenly, and without notice, start retaining memory (even if atypical in practice). Thus, IMHO, which is faster is a moot point.
rbehrends · a year ago
The same argument would apply to any non-compacting allocator, because the worst case memory blowup due to external fragmentation is huge. But such cases are extremely rarely observed in practice, so people use e.g. standard malloc()/free() implementations or non-compacting garbage collectors without being concerned about that.

In addition, there are plenty of cases where memory usage is unbounded or excessive, not because of allocator behavior, but because of programming mistakes. In fact, memory can sometimes blow up just because of large user inputs and very few systems are prepared for properly handling OOM conditions that happen legitimately.

Case in point: Both CRuby and Apple's JavaScriptCore have garbage collectors that use conservative stack scanning and are widely used in production systems without the world ending.

That said, you're probably not going to use conservative stack scanning because of collection speed alone. There are other trade-offs between conservative stack scanning and precise stack scanning that weigh more heavily.

I'll add the caveat that I would be very cautious about using a conservative GC on 32-bit systems, but on 64-bit systems this is about as much of a concern as memory fragmentation.

rbehrends commented on Kotlin for data analysis   kotlinlang.org/docs/data-... · Posted by u/saikatsg
yen223 · a year ago
The mechanism in Kotlin that allows them to limit yield() to a sequence {} block, without introducing a new keyword, is pretty dang cool.
rbehrends · a year ago
What happens under the hood is that a `sequence {}` call creates an instance of `SequenceScope`, which has `yield()` and `yieldAll()` methods. When executing the block, `this` will reference that particular instance and `yield()` is essentially `this.yield()` and will call the method on the scope instance.

The actual functionality is then provided by the coroutine system, though a lot of the heavy lifting is done by the optimizer to eliminate all or most of the runtime overhead.
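The receiver mechanism itself doesn't require coroutines and can be mimicked with an ordinary class (hypothetical `Emitter`/`emit` names, eager rather than lazy, for illustration only):

```kotlin
// emit() plays the syntactic role of SequenceScope.yield().
class Emitter {
    val items = mutableListOf<Int>()
    fun emit(x: Int) { items += x }
}

// Like sequence {}, emitting {} takes a lambda with Emitter as its
// receiver, so a bare emit(x) inside the block means this.emit(x).
fun emitting(block: Emitter.() -> Unit): List<Int> =
    Emitter().apply(block).items

val result = emitting {
    emit(1)
    emit(2)  // resolves against the enclosing Emitter instance
}
println(result)  // [1, 2]
```

Unlike the real `sequence {}`, this version runs eagerly; the lazy, resumable behavior of `yield()` is what the coroutine machinery adds on top.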

rbehrends commented on Kotlin for data analysis   kotlinlang.org/docs/data-... · Posted by u/saikatsg
Blot2882 · a year ago
I'm not sure how someone could see Kotlin as more expressive than Python, unless I am misinterpreting what expressive means. Python has good language features and helpful abstractions like list comprehensions.

What makes Kotlin more expressive? I understand it has some functional features but I've never seen anything dramatically flexible.

rbehrends · a year ago
As somebody who uses and likes both Kotlin and Python (and quite a few other languages), I'd be cautious about using a subjective term such as "more expressive", too, but I can perhaps shed some light on where such feelings come from.

Personally, I see Kotlin as the closest thing to a statically typed Smalltalk that we have among major languages, and that's a major draw.

A key part here is that Kotlin closures are fully featured equivalents of Smalltalk blocks (up to and including even non-local returns [1]), whereas closures in many other languages fall short of that: Java does not allow mutation of captured local variables, and Python restricts lambdas to a single expression.
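Both points are easy to demonstrate with a small sketch (illustrative function names):

```kotlin
// Mutating a captured local from a closure (disallowed for Java lambdas).
fun sum(xs: List<Int>): Int {
    var total = 0
    xs.forEach { total += it }
    return total
}

// Non-local return: the return inside the lambda exits firstEven itself,
// much like returning from the middle of a Smalltalk block.
fun firstEven(xs: List<Int>): Int? {
    xs.forEach { if (it % 2 == 0) return it }
    return null
}

println(sum(listOf(1, 2, 3)))        // 6
println(firstEven(listOf(1, 3, 4)))  // 4
```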

I find code whose behavior can be parameterized by code to be an essential feature of modern-day programming and this should be as frictionless as possible.

This is also a situation where syntax matters, and while it isn't quite as nice as Smalltalk, Kotlin's syntax (esp. with trailing closures) makes such code as readable as possible in a brace-style language, with minimal additional syntactic noise.

In a similar vein, the functionality of Smalltalk's cascades is offered through scope functions [2], especially `.run {}`.
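For instance, a cascade-like series of calls on a single receiver might look like this (a small sketch):

```kotlin
val greeting = StringBuilder().run {
    append("Hello")
    append(", ")
    append("world")  // each call targets the same receiver, cascade-style
    toString()       // run {} returns the last expression in the block
}
println(greeting)    // Hello, world
```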

But ultimately, fully-featured closures (and the fact that they are widely used in the standard library) power a lot of the things that people seem to like about Kotlin.

That does not mean that there aren't downsides. The limitations of running on the JVM are one (e.g. while Kotlin has workarounds for the JVM's type erasure, they're still workarounds), and then Gradle is arguably Kotlin's weakest point (which apparently even JetBrains are seeing, given their investment in Amper).

That said, personally I'd say Kotlin's static typing and performance would be the primary reasons for me to reach for Kotlin over Python, not necessarily expressiveness. Type annotations in Python + mypy etc. just aren't the same experience, and writing performance-sensitive code in Python can be very tricky/hacky when you can't delegate the hot paths to numpy or other existing C/C++/Rust libraries.

Conversely, Python often has a leg up when it comes to fast prototyping and scripting, even with Kotlin Worksheets in IntelliJ IDEA and with kscript.

[1] Which, to be clear, is a nice-to-have, not essential, but it's still impressive that even that was covered; previously, Ruby was the only major language I know of that had it.

[2] https://kotlinlang.org/docs/scope-functions.html

rbehrends commented on Kotlin for data analysis   kotlinlang.org/docs/data-... · Posted by u/saikatsg
poikroequ · a year ago
A bit of a contrived example, but something like this (two for statements):

[x*y for x in range (10) for y in range(10)]

It's not often, but occasionally there are moments where I'm writing code in Kotlin and wish I could use a list comprehension. I do prefer Kotlin overall, but there are a few things that I think would be "nice to have" from Python. Especially the yield keyword, such a wonderful way to write your own iterators.

rbehrends · a year ago
Like this?

  sequence { for (x in 0..<10) for (y in 0..<10) yield(x*y) }.toList()
Now, technically, Kotlin doesn't have list comprehensions, only the equivalent of generator expressions in Python, so you have to tack an extra `.toList()` on at the end if you want a list, but you can write pretty much any for comprehension in Python in a similar way in Kotlin.

On the other hand, you're not limited to for loops/ifs inside such a generator, but can use fairly arbitrary control flow.
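For example (an illustrative sketch), a Collatz-trajectory generator mixes a while loop with conditional branches, control flow that a single comprehension can't express directly:

```kotlin
// Collatz trajectory as a generator: loop until we reach 1,
// choosing the next value with an ordinary conditional.
fun collatz(start: Int) = sequence {
    var n = start
    while (n != 1) {
        yield(n)
        n = if (n % 2 == 0) n / 2 else 3 * n + 1
    }
    yield(1)
}

println(collatz(6).toList())  // [6, 3, 10, 5, 16, 8, 4, 2, 1]
```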

rbehrends commented on Kotlin for data analysis   kotlinlang.org/docs/data-... · Posted by u/saikatsg
poikroequ · a year ago
I think both languages have their strengths. I love Kotlin for its functional programming (map, filter, etc) and strong static typing. But Python has some nice features as well, such as list comprehension, the 'yield' keyword, and annotations are super simple to implement.
rbehrends · a year ago
> the 'yield' keyword

Am I missing something here?

  $ cat fib.kts
  fun fib() = sequence {
    var a = 1; var b = 1
    while (true) {
      yield(a)
      a = b.also { b += a }
    }
  }

  println(fib().take(10).joinToString(" -> "))
  println(fib().first { it >= 50 })
  $ kotlin fib.kts
  1 -> 1 -> 2 -> 3 -> 5 -> 8 -> 13 -> 21 -> 34 -> 55
  55
Of course, yield() is a function in Kotlin, not a keyword, but the same functionality is there.

rbehrends commented on Borrow Checking, RC, GC, and the Eleven () Other Memory Safety Approaches   verdagon.dev/grimoire/gri... · Posted by u/todsacerdoti
dataflow · a year ago
> First of all, I recommend giving the paper a read

I'll put it on my list, thanks for the recommendation, but it really has no impact on my point (see next point).

> because I think you're misunderstanding the claim (plus, it is a very good paper)

Note: I wasn't criticizing the paper. I was criticizing the comment, which claimed "it’s better" to view these as special cases.

If it's not obvious what I mean, here's an analogy: it's the difference between having a great paper that reduces ~everything to category theory, vs. claiming "it's better" for the audience I'm talking to to view everything in terms of category theory. I can be impressed by the former while still vehemently disagreeing with the latter.

> For starters, RC absolutely can handle cycles (e.g. through trial deletion).

"Can handle" is quite the hedge. You "can" walk across the continent too, but at what cost?

> The most prominent example of a programming language that uses such an approach is probably Python.

You're saying Python uses RC to handle reference cycles, and doesn't need a GC for that? If so please ask them to update the documentation, because right now it specifically says "you can disable the collector if you are sure your program does not create reference cycles". https://docs.python.org/3/library/gc.html

> [...] hard real time [...]

Nobody said "real time". I just said "hard guarantee".

rbehrends · a year ago
> Note: I wasn't criticizing the paper. I was criticizing the comment, which claimed "it’s better" to view these as special cases.

I didn't assume you were. My note about it being a good paper was just a general "this is worth reading" recommendation.

> "Can handle" is quite the hedge. You "can" walk across the continent too, but at what cost?

It's not a hedge. You claimed that (tracing) GC can handle cycles, while RC was "the opposite", which I read to mean that you believe it cannot.

While we are at it, let's go through the basics of trial deletion.

Trial deletion first looks at possible candidates for objects involved in a cycle (in the original algorithm, those were objects whose RC got decremented without reaching zero). Then, you do a recursive decrement of their children's (and their children's children's, and so forth) reference counts.

Unlike with regular reference counting decrements, you visit children even if the reference count doesn't reach zero. The net result is that reference counts are reduced along internal paths only, while objects that are still reachable from external paths retain reference counts > 0.

Thus, any object with a reference count of zero after this step must be part of an internal cycle and can be deleted. All other objects have their original reference counts restored.
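Those steps can be sketched as follows (a toy, single-threaded rendition of the Bacon/Rajan-style algorithm; real implementations buffer candidates and interleave this with normal RC operations):

```kotlin
// Toy object graph: each node carries a reference count and the
// children it holds references to. Colors track the phases.
class Node(var rc: Int) {
    val children = mutableListOf<Node>()
    var color = "black"  // black = live, gray = trial-decremented, white = garbage
}

// Phase 1: trial decrement along internal paths, visiting children
// even when their count does not reach zero.
fun markGray(n: Node) {
    if (n.color == "gray") return
    n.color = "gray"
    for (c in n.children) { c.rc--; markGray(c) }
}

// Externally reachable: restore the counts decremented in phase 1.
fun scanBlack(n: Node) {
    n.color = "black"
    for (c in n.children) {
        c.rc++
        if (c.color != "black") scanBlack(c)
    }
}

// Phase 2: rc == 0 after the trial decrements means the node is only
// kept alive by an internal cycle; rc > 0 means external reachability.
fun scan(n: Node) {
    if (n.color != "gray") return
    if (n.rc > 0) scanBlack(n)
    else {
        n.color = "white"
        for (c in n.children) scan(c)
    }
}

// Phase 3: everything left white is garbage.
fun collectWhite(n: Node, out: MutableList<Node>) {
    if (n.color != "white") return
    n.color = "collected"
    out += n
    for (c in n.children) collectWhite(c, out)
}

fun collectCycles(candidates: List<Node>): List<Node> {
    candidates.forEach { markGray(it) }
    candidates.forEach { scan(it) }
    val garbage = mutableListOf<Node>()
    candidates.forEach { collectWhite(it, garbage) }
    return garbage
}

// Demo: a two-node cycle with no external references is collected.
val a = Node(1)
val b = Node(1)
a.children += b; b.children += a
println(collectCycles(listOf(a)).size)  // 2
```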

Because trial deletion operates on reference counts differently, it's not something that you can easily implement as a library, which is why you don't see it much except when a language implementation chooses to go with reference counting over a tracing GC.

> You're saying Python uses RC to handle reference cycles, and doesn't need a GC for that? If so please ask them to update the documentation, because right now it specifically says "you can disable the collector if you are sure your program does not create reference cycles". https://docs.python.org/3/library/gc.html

This is a terminology thing. Python uses a generational variant of the trial deletion approach [1]. That's not a traditional tracing GC, and the documentation isn't inaccurate either, because "GC" can mean more than a traditional tracing collector.

> Nobody said "real time". I just said "hard guarantee".

I was not sure what you meant, so I answered both, as you may have noticed.

[1] https://github.com/python/cpython/blob/796b3fb28057948ea5b98...

rbehrends commented on Borrow Checking, RC, GC, and the Eleven () Other Memory Safety Approaches   verdagon.dev/grimoire/gri... · Posted by u/todsacerdoti
dataflow · a year ago
I have to disagree? Lumping them together is like lumping together formal verification with machine learning...

Two fundamental characteristics of garbage collection (in pretty much every programmer's experience) are that (a) it can handle cycles (read: more general programs), and (b) it does not provide hard performance guarantees in the general case. Reference counting is literally the opposite, and that's exactly what people love about it.

rbehrends · a year ago
First of all, I recommend giving the paper a read, because I think you're misunderstanding the claim (plus, it is a very good paper). The claim is not that they are equivalent, but that tracing GC and reference counting are dual solutions to the same problem, two ends of the same spectrum if you will, with hybrid solutions existing in between.

Second, what you seem to consider to be fundamental characteristics of tracing GC and RC is not in fact so fundamental.

For starters, RC absolutely can handle cycles (e.g. through trial deletion). Such implementations may be difficult or impossible to implement as pure library solutions, but there is nothing that says it can't be done. The most prominent example of a programming language that uses such an approach is probably Python.

Nor does the claim that tracing GC cannot provide hard performance guarantees in the general case (while RC does) hold up under closer examination. Leaving aside the problem that it's already non-trivial to provide hard real time performance guarantees for malloc()/free() and ignoring the issue of cascading deletions, it doesn't hold under the more relaxed assumptions discussed downthread.

For starters, we have such predictability only for the single-threaded case, not for arbitrary multi-threaded situations. And even in the single-threaded case, there are real use cases where predicting performance becomes functionally intractable. Examples are implementations of binary decision diagrams or certain persistent data structures, where the presence of shared subgraphs of arbitrary size makes predicting the performance of individual deallocations impractical.

In contrast, in the single-threaded case we can absolutely bound individual operations of a tracing GC by either a constant or (in the case of arbitrarily sized allocations) make them linear in the number of bytes allocated (e.g. Baker's treadmill).

What is true is that in the absence of cycles, (naive) reference counting will free memory at the earliest opportunity, which is not something we can say for tracing GC.

rbehrends commented on Garbage collection for systems programmers (2023)   bitbashing.io/gc-for-syst... · Posted by u/ingve
pkolaczk · a year ago
Java cannot express immutability. `final` is not transitive, so nothing stops an unrelated change in the code from breaking something that was immutable earlier. Same with Golang.
rbehrends · a year ago
You can express immutability simply by making all instance variables private and not providing any methods to modify them.

u/rbehrends

Karma: 2957 · Cake day: December 27, 2011