Readit News
eggy · 4 years ago
Before Python became ML's darling, there was Lush: Lisp Universal SHell, by none other than Yann LeCun and Léon Bottou[1], and I thought my Lisp knowledge was going to finally pay off ;) I've since moved on to J and APL. SBCL is amazing: even without calling C directly, it is still very fast. Python is slow without its specialized libraries from the C world, but SBCL produces short, readable code that is pretty efficient. There's ECL[2] for an embeddable Lisp, or uLisp[3] for really small processors like the ESP chips.

[1] http://lush.sourceforge.net/ [2] https://common-lisp.net/project/ecl/ [3] http://www.ulisp.com/

protomikron · 4 years ago
Why did it not catch on, though?

Python was already well known as a "scripting" and server-side web development language in the early 2000s, but it's commonly mentioned that it really exploded in the 2010s, when it became the implementation language for several scientific packages, most notably the machine learning ecosystem.

It seems that the language really found a local optimum that appeals to many different people across disciplines.

taeric · 4 years ago
My vote is that it's due to Python being included in most base distributions. Combined with a ridiculously loose concept of dependency management, it is trivial to tell people how to get started with many Python projects.

This doesn't really scale well, but momentum has a way of making things scale, especially when most of Python's popular libs are pseudo-ports of other languages' libraries.

dragonwriter · 4 years ago
> Python was already well known as a "scripting" and server-side web development language in the early 2000s, but it's commonly mentioned that it really exploded in the 2010s, when it became the implementation language for several scientific packages, most notably the machine learning ecosystem.

That's...somewhat misleading. Python was well known for its scientific stack as well as for server-side web development and scripting from the early 2000s (or earlier; NumPy, under its original name of Numeric, was released in 1996, BioPython in 2000, matplotlib in 2003, etc.).

In the 2010s, it became known for its machine learning stack, which was built on top of the existing, already solidly established, scientific stack.

nonameiguess · 4 years ago
Python makes it pretty trivial to load compiled modules as third-party packages, and given that the core interpreter is itself implemented this way (at least for CPython), creating numerical packages as thin wrappers around pre-existing BLAS implementations was probably easier in Python than in Lisp.
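As a toy sketch of that thin-wrapper pattern (using libm's cos here rather than a real BLAS routine):

```python
import ctypes
import ctypes.util

# Locate and load the system C math library; the lookup varies by platform.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature so ctypes marshals doubles, not ints.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

NumPy's real wrappers go through the C API rather than ctypes, but the principle is the same: hand the heavy lifting to an already-compiled library.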

It might seem stupid, but operator overloading and metaprogramming features make it fairly simple to emulate the syntax of other languages that scientific users were already familiar with. Specifically, NumPy, SciPy, and matplotlib quite obviously tried to look almost exactly like MATLAB, and later pandas very closely emulated R. It's a lot easier to target users coming out of university programs in statistics and applied math who have been using R and MATLAB and teach them the equivalent Python libraries. Trying to teach people who aren't primarily programmers to use Lisp involves a much steeper learning curve.
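A few lines of NumPy make the MATLAB resemblance concrete (an illustrative toy, not taken from any library's docs):

```python
import numpy as np

# ndarray overloads +, *, @, comparisons, and [] (__matmul__, __gt__,
# __getitem__, ...), so array code reads much like MATLAB.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])

y = A @ x + 2 * x   # MATLAB: y = A*x + 2*x
mask = y > 6        # logical indexing, familiar from MATLAB and R
print(y, y[mask])   # [5. 9.] [9.]
```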

It really didn't explode in the 2010s, either. You're thinking of Facebook with PyTorch and Google with TensorFlow making it dominant in deep learning, but the core scientific computing stack goes back way further than that. As for why Google and Facebook chose Python rather than Lisp, I think it was simply already one of the officially supported languages they allowed internal product teams to use; Lisp was not. Maybe that's a mistake, maybe it isn't, but it's a decision both companies made before they even got into deep learning.

kragen · 4 years ago
Lush's heyday was the 01990s, and it's pretty much only useful for "scientific computing", which has a big overlap with "machine learning". Also, it's a Lisp. Python has more readable syntax and was used for all sorts of things even before Numpy existed.

tdsamardzhiev · 4 years ago
I believe Python won because of its popularity in academia.
pvitz · 4 years ago
I am using J on and off, but I am not aware of a ML package for it. Are you writing all algorithms yourself or could you recommend a package?
eggy · 4 years ago
I have a bunch of links to ML material for either APL or J. I don't know of any particular library for J. J is interpreted, so it is not as fast as compiled implementations. I am mainly using it to experiment with concepts and teach myself more ML in J, because of the interactive nature of the REPL and the succinct code. I can keep what's going on in my head and glance at fewer than 100 lines of code (usually about 15) to refresh it.

There is a series of videos on learning neural networks in APL, cited by others in this thread.

The pandas author, Wes McKinney, cited J as an influence on his work.

Extreme Learning Machine in J (code and PDF are here too):

https://github.com/peportier/jelm

Convolutional neural networks in APL (PDF and video on page):

https://dl.acm.org/doi/10.1145/3315454.3329960

A DSL to implement MENACE (Matchbox Educable Noughts And Crosses Engine) in APL (Noughts and Crosses or Tic-tac-toe):

https://romilly.github.io/o-x-o/an-introduction.html

liveranga · 4 years ago
There's a handful of resources around that look fun though I haven't dug into them yet.

There's this on the J wiki: https://code.jsoftware.com/wiki/User:Brian_Schott/code/feedf...

And there's this YouTube series for APL: https://youtube.com/playlist?list=PLgTqamKi1MS3p-O0QAgjv5vt4...

ruste · 4 years ago
I'd also like to know. I'm considering writing some for fun in GNU APL right now.
eigenhombre · 4 years ago
I write Clojure for food, and Common Lisp for fun. One reason for the latter is CL's speed: a while back I compared a bit of (non-optimized) Clojure code I wrote for a blog post with a rough equivalent in CL, and was stunned that the Common Lisp code ran about 10x faster. This made me curious how fast it could be made if I really tried, and I was able to get nearly 30x more[1] by optimizing it.

Clojure is definitely fast enough for everything I've done professionally over six years. But Common Lisp, while having plenty of rough edges, intrigues me on the basis of performance alone. (This is on SBCL -- I have yet to play with a commercial implementation.)

[1] http://johnj.com/from-elegance-to-speed.html

lmilcin · 4 years ago
I have implemented a real algotrading framework in Common Lisp (SBCL) that connected directly to the Warsaw Stock Exchange.

The application was receiving binary messages from the exchange over multicast, rebuilding state of the market, running various (simple) algorithms and responding with orders within 5 microseconds of the original message, at up to 10k messages per second.

With SBCL you can write a DSL and have the ability to fully control the resulting instructions (through vops). It is just a question of how dedicated you are to writing macros upon macros upon macros.

I used this to the fullest extent and the result was as good as any hand optimized C/assembly.

For example, the binary message parser would receive a stack of complicated specifications for the messages in the form of XML files (see here if you are curious: https://www.gpw.pl/pub/GPW/files/WSE_CDE_XDP_Message_Specifi...), convert the XML to a DSL, and then, through the magic of macros, the DSL was converted to accessors that allowed optimal access to the fields. Optimal here means the assembly could not be improved upon any further.
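The real thing was macros generating vops, but the spec-to-accessor idea can be sketched in Python (the message layout below is invented, not from the WSE spec):

```python
import struct

# Toy message spec: (field name, struct format code). The real system
# derived its specs from the exchange's XML files; these fields are made up.
NEW_ORDER = [("msg-type", "<B"), ("order-id", "<Q"), ("price", "<q")]

def compile_accessors(spec):
    """Precompute one fixed-offset accessor per field, once, up front;
    a runtime stand-in for what the CL macros emit at compile time."""
    accessors, offset = {}, 0
    for name, fmt in spec:
        s = struct.Struct(fmt)
        # Freeze this field's Struct and byte offset into the closure.
        accessors[name] = lambda buf, s=s, o=offset: s.unpack_from(buf, o)[0]
        offset += s.size
    return accessors

acc = compile_accessors(NEW_ORDER)
msg = struct.pack("<BQq", 1, 42, 99950)
print(acc["order-id"](msg), acc["price"](msg))  # 42 99950
```

The difference, of course, is that the macro version does this work at compile time, while this sketch still pays a closure call at runtime.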

Large parts of the application (especially communication and device control, as it was done with full kernel bypass) were written in ANSI C, but the Foreign Function Interface makes integrating that into the rest of the application a breeze.

I write all of this because I frequently meet complete disbelief from people that Lisp can be used in production.

I personally think it is exactly the opposite. Lisps offer a fantastic development environment. The problem is developers, who are mostly unable to use Lisp effectively for work, I guess due to too much power, too much freedom, and a complete lack of direction on how to structure an application.

pjmlp · 4 years ago
It is similar to how many still don't believe in GC-enabled systems languages.

If some big-shot corporation with weight in the industry doesn't shove it down devs' throats, then it isn't possible.

And regarding Lisp, most tutorials keep ignoring that arrays, structures, stack allocation, deterministic allocation,... are also part of Common Lisp.

Galanwe · 4 years ago
You sure of the 5us figure?

Working in the field, that seems overconfidently good, considering it's faster than most wire-to-wire SLAs of world-class FPGAs doing market data translation.

jamtho · 4 years ago
This is fascinating. Did you come across any good resources to help learn how to optimise aggressively on SBCL, as you're describing?
scrubs · 4 years ago
Hmmm, CL for composition of software components (i.e., most software), and anything needing low-level access in C. I could drink that Kool-Aid.
_bsless · 4 years ago
Others have made similar comments on comparing apples to oranges when comparing optimized CL to idiomatic Clojure, but what they didn't address was "idiomatic Clojure". Threading a sequence through a bunch of lazy operations is good for development time, but at this point I wouldn't consider it idiomatic for "production" code.

Two things in the implementation are performance killers: Laziness and boxed maths.

- baseline: Taking your original implementation and running it on a sequence of 1e6 elements I generated, I start off at 1.2 seconds.

- Naive transducer: needs a sliding-window transducer, which doesn't exist in the core yet[0]: 470ms

- Throw away function composition, use peek and nth to access the vector: 54ms

- Map & filter -> keep: 49ms

- Vectors -> arrays: 29ms

I'd argue only the last step makes the code slightly less idiomatic. One might even say that aggressively using juxt, partial, and apply is less idiomatic than simple lambdas.

You can see the implementation here[1].

[0] https://gist.github.com/nornagon/03b85fbc22b3613087f6

[1] https://gist.github.com/bsless/0d9863424a00abbd325327cff1ea0...

Edit: formatting

NoahTheDuke · 4 years ago
Your post is a lot of fun! I have a fondness for these kinds of deep dives. That being said, I feel like comparing the initial unoptimized Clojure code to highly optimized Common Lisp is kind of unfair. I wonder how fast the Clojure code could run if given the same care. Maybe I'll give that a try tonight!
eigenhombre · 4 years ago
That would be great! I agree it's not a fair comparison -- the post was meant more along the lines of, "how far can I push Common Lisp and learn some things?" rather than a strict comparison of performance of optimized code in each language. As I said, Clojure is fast enough for my purposes nearly all the time.
ludston · 4 years ago
You're going to be severely constrained by the fact that idiomatic Clojure code uses immutable data structures, which theoretically cannot be as fast as mutable ones in SBCL. Even with transients, the Clojure hash map trie is an order of magnitude slower than SBCL's hash table.
gleenn · 4 years ago
I also write Clojure for food, but also for fun. I found that a lot of Clojure is fast enough as-is, and speeding it up is relatively easy with some type hinting and perhaps dipping into mutable data structures or other tricks.

Relevant Stackoverflow which contains a lot of simple suggestions to hopefully shore up the deficit compared to SBCL: https://stackoverflow.com/questions/14115980/clojure-perform...

didibus · 4 years ago
I'm sorry to say, but I'm not a fan of articles like this: you're not using the same data structures and algorithms, so what's the point?

To compare language compiler and runtime performance you should at least use similar data structures and algorithms.

> but, you can generate very fast (i.e., more energy efficient)

I'm actually not sure this is true; I was surprised the other day to find a study showing that the JVM was one of the most energy-efficient runtimes.

This was the link: https://thenewstack.io/which-programming-languages-use-the-l...

And what you can see is that faster execution time doesn't always mean more energy efficient. For example, you can look at Go and see that it beats a lot of languages in execution time but loses to them in energy efficiency. Go, for instance, was faster than Lisp yet less energy efficient.

hajile · 4 years ago
Quite a few Lisp implementations allow you to drop into inline assembly. That makes them theoretically pretty close to maximum efficiency.
vindarel · 4 years ago
It would be cool if you updated the link to the Cookbook in your article from the old one (on SourceForge) to the newer one (on GitHub): https://lispcookbook.github.io/cl-cookbook/ Best,
eigenhombre · 4 years ago
I will, thank you!
earthscienceman · 4 years ago
Your website inspires me. I've been on a similar career trajectory to you in some ways, although at the opposite end of the earth: starting in physics and moving away over time, though I'm only 33 now. I love that you're putting your art on display. It's very rare to find scientists pursuing art in such a direct way.
kaba0 · 4 years ago
I would be interested in at least the performance of type-hinted Clojure code.
fulafel · 4 years ago
The closest translation of the code, having already dropped the laziness of the Clojure version, was 4x faster. The rest of the speedups came from rewriting the code!
orthecreedence · 4 years ago
I miss my Common Lisp days, and I think Rust being able to export C ABIs makes it a really great companion language for CL. I also think Common Lisp (or really, any fast Lisp) is a great tool for game development. The fact that you can redefine a program as it's running really helps you iterate without having to set up your state over and over. Pair that with a fast, low-level language that can handle the lower-level stuff (I've been trying to find time to look at Bevy in Rust), and you have a killer combo.

The main reason I stopped using lisp was because of the community. There were some amazing people that helped out with a lot of the stuff I was building, but not a critical mass of them and things just kind of stagnated. Then it seemed like for every project I poured my soul into, someone would write a one-off copy of it "just because" instead of contributing. It's definitely a culture of lone-wolf programmers.

CL is a decent language with some really good implementations, and everywhere I go I miss the macros. I definitely want to pick it up again sometime, but probably will just let the async stuff I used to work on rest in peace.

sahil-kang · 4 years ago
I can understand the failure to build a critical mass of contributors, but can you share some examples of where your work was duplicated instead of built upon?

I’ve read through some of your async work in the past and from an initial glance, it seemed like you had the right idea by wrapping existing event libs and exposing familiar event loop idioms. At the very least, it seemed uncontroversial so I’m interested to see why others would choose not to build upon it.

orthecreedence · 4 years ago
> can you share some examples of where your work was duplicated instead of built upon?

Wookie (http://wookie.lyonbros.com/) was the main one, or at least the most obnoxious to me. I was trying to create a general-purpose HTTP application server on top of cl-async. Without naming any specific projects, it was duplicated because it (and/or cl-async) wasn't fast enough.

> At the very least, it seemed uncontroversial so I’m interested to see why others would choose not to build upon it.

A superficial need for raw performance seemed to be the biggest reason. The thing is, I wasn't opposed at all to discussions and contributions regarding performance. I was all about it.

Oh well.

a-dub · 4 years ago
orthecreedence · 4 years ago
I was not! Thanks for the tip.
temporallobe · 4 years ago
I regularly work with Clojure.

This is probably an unpopular opinion in this thread, but despite having worked with it for years, I still don’t much like it, mostly because it’s far too terse and the syntax is so far removed from that of C-based languages. The other day I wrote a Java-based DTO and it was refreshing how clear and verbose everything was, despite the almost comical volume of code it took in comparison to what similar functionality would look like in Clojure. Plus, the state of tooling in Clojure is not the best.

I would also add that while you might initially do well with something like Clojure, it may be difficult to maintain, especially if you plan to make it a legacy product with O&M support in the future.

taeric · 4 years ago
I'm having trouble imagining how the DTO would look bad in a Lisp. It should be close to (defstruct field_name next_field...). What makes that so much worse than what it would be in Java?

The difficult to maintain line really rubs me the wrong way. I've seen messes in all languages I've worked with and see no reason to think a lisp would be worse. Just conjecture from all around.

bcrosby95 · 4 years ago
Funny, I regularly work in Java. And I'm tired of writing stupid bags of data.
agumonkey · 4 years ago
I agree with both of you :)
Guthur · 4 years ago
Well this article is about Common Lisp :).

You could easily use CLOS or Structs in Common Lisp to do DTO. If you were using CLOS for DTOs you'd also have generic functions to dispatch on said DTOs.

jhhh · 4 years ago
The hard-to-maintain issue seems to follow most dynamic languages around. What kinds of issues have you run into in your time working with Clojure professionally?
Guthur · 4 years ago
To be honest: "hard to maintain", "technical debt", "total rewrite required", "monolith to microservice", probably "microservice to monolith" at some point -- all of these appear no matter what the language, and I really don't think the lack of static analysis is the problem. Though I will grant that pretty much every popular dynamic language has a terrible runtime environment, which doesn't help at all. If I don't have comprehensive static analysis, I want good introspection and reflection; ideally I'd want static manipulation and continuation.
closeparen · 4 years ago
In maintaining verbose codebases I often find that certain refactors or paths to improvement are closed off because they just wouldn't be worth that much typing. The verbosity may make it easier to understand why something is going wrong (usually the bulk of the intellectual work), but it also means the constructs around which the program was initially designed will be its constructs forever.

lkey · 4 years ago
As with all of these forays, I applaud the author for learning new things, but these benchmarks are primarily testing reading from stdin and format printing to stdout, followed by testing bignum impls, as mentioned elsewhere. For this reason, the title and benchmarks are a bit misleading.
xxs · 4 years ago
...also testing half-compiled code. JIT-compiled tests that run for only a few seconds just discredit their authors. It's yet another example of how not to do microbenchmarks.
twicetwice · 4 years ago
Worth pointing out, for those who don't know, that the Java runtime JIT-compiles bytecode based on observed runtime patterns. It blew my mind when engineers were talking about "warming up the code paths" during startup of a large Java service and I found out it wasn't a joke or superstition.
brabel · 4 years ago
It doesn't discredit anything. Sometimes what you care about is performance from a cold start: serverless and CLIs, for example. JIT ain't gonna help you there, and this problem falls into that category.

EDIT: did you read the article? It is not a microbenchmark.

Decabytes · 4 years ago
Is it weird that I don’t like Common Lisp at all but I like Scheme a lot? I just never liked Lisp-2s and separate namespaces for functions and variables. That really is the biggest issue for me. I’m sure if only Common Lisp existed it wouldn’t bother me at all.

That being said, I think CL is a fantastic language, and there are a lot more libraries out there for doing useful things than in Scheme. My C programming is really weak, so I find it challenging whenever I come across a C library that isn’t already wrapped.

_syeu · 4 years ago
I'm a bit the same. I've been writing Racket for a number of years now and looking back at Lisp I see a lot of ugliness that I don't really think I enjoy.

Racket has a nice package manager and module system that kind of works for me, and the documentation is honestly some of the best I've ever used, if not my favorite. Comparatively, I've tried using GNU Guile and found the online documentation to be horrendous, and trying to find online documentation for what's considered to be the "standard library" in Common Lisp still confuses me.

I love seeing people use CL and other Lisp-likes in the wild, and Norvig was a big inspiration for me.

bollu · 4 years ago
What IDE do you use for Racket? In Emacs, I've found SLIME and its associated debugger to be more powerful than Geiser. I never could come to like DrRacket, due to its lack of autocomplete and of things like parinfer/paredit.
slyrus · 4 years ago
I liked the way Coalton built a statically typed Lisp-1 in CL.

Undoubtedly there are some issues I haven't thought through, and I'm too lazy to actually try to implement it, but I've always thought one should be able to make a "CL1" (or some such) that's basically Common Lisp but with a single namespace for functions and variables.

bitwize · 4 years ago
Not really. I'm a diehard Schemer as well, for the same reasons: the Lisp-1 nature helps you treat procedures as true first-class values and fits in with Scheme's more functional style. Something you can do in CL also, just with a bit more ceremony. And CL programmers are more "haha, setf go brrrr" anyway.

That said, I'd rather use CL by far than any other language except Scheme, and there are cases where CL is the right thing and Scheme is not. The most brilliant, beautiful, joy to maintain enterprise system I've ever seen was written in CL.

randomswede · 4 years ago
I used to think it was irritating, until the very moment I tried naming an input parameter in Scheme "list" (while also constructing lists with "list" in the same function).

That was the moment I started down the path to liking separate variable and function namespaces.

I also occasionally get bitten by this in Python, isn't "file" a perfect variable name for holding a handle to an opened file?
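The Python version of the same trap, for anyone who hasn't hit it (a toy example, mirroring the Scheme situation):

```python
def lengths(list):                 # 'list' is a perfectly natural name...
    return [len(x) for x in list]

def tails(list):
    # ...until you need the builtin in the same scope: here 'list' is
    # the argument, so calling it raises "'list' object is not callable".
    return list(list[0])

print(lengths([[1], [2, 3]]))  # [1, 2]
```

In a Lisp-2 the function binding of "list" would be untouched, which is exactly why I came around to the separation.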

aidenn0 · 4 years ago
Not at all weird. I'm the opposite; I never liked Lisp 1s, and whenever I use one I always end up shadowing function or macro bindings by accident.
fiddlerwoaroof · 4 years ago
I’m exactly the same way: also, Lisp-1 code is frequently full of weird abbreviated variable names like “lst”, when in CL I can just write “list” without worrying about clobbering an important function.
tyingq · 4 years ago
Not the same phone-encoding challenge, but an interesting feature of the brace expansion built into bash. The map is the typical letter map of a touch-tone phone, where "2" can be a, b, or c.

All letter combos for the number "6397":

  #!/usr/bin/bash
  m[0]=0;m[1]=1;m[2]="{a,b,c}";m[3]="{d,e,f}";m[4]="{g,h,i}";m[5]="{j,k,l}";
  m[6]="{m,n,o}";m[7]="{p,q,r,s}";m[8]="{t,u,v}";m[9]="{w,x,y,z}"
  var=$(echo ${m[6]}${m[3]}${m[9]}${m[7]})
  eval echo $var

kubb · 4 years ago
I found Common Lisp to be surprisingly ahead of its time in many regards (debugging, REPL, compilation and execution speed, metaprogramming), but unfortunately it doesn't have a large community, and it's showing its age (no standard package management, threading not built into the language). It's also dynamically typed, which disqualifies it for large collaborative projects.
gibsonf1 · 4 years ago
It has great package management with https://www.quicklisp.org/beta/ and some truly great, high-quality libraries, especially Fukamachi's suite of web libraries and many others. Woo, for example, is the fastest web server: https://github.com/fukamachi/woo (faster than the runner-up, Go, by quite a bit).

For parallel computing, we use https://lparallel.org/; it's been great at handling massive loads across all processors elegantly. And for locking against overwrites on highly parallel database transactions, we use the mutex locks that are built into the http://sbcl.org/ compiler, with very handy macros.

Mikeb85 · 4 years ago
Slightly off-topic but I'm in awe of Fukamachi's repos. That one person has built so much incredible stuff is amazing to me. Not sure it's enough to get me using CL all the time, but it's very impressive.
xedrac · 4 years ago
I don't really consider Quicklisp to be "great package management", since you have to download it, install it, then load it. And don't forget to modify your SBCL init script to load it automatically. It felt quite cumbersome to get started with, even though it was simple enough after that. Rust has truly great package management, in my opinion: I run one command to install Rust and I immediately have access to all the crates on crates.io.

EDIT: It's kind of ironic for me to make this claim since I use Emacs as my editor...

formerly_proven · 4 years ago
> It's also dynamically typed which disqualifies it for large collaborative projects.

I've been around the block for long enough to see how far the pendulum swings on this one. I'm guessing that it starts going the other way soon.

kubb · 4 years ago
In my opinion after years in the industry, the benefits of type safety are too compelling and well known, to the point that I don't even feel like debating it. It's not a fad that will change periodically.
ballpark · 4 years ago
I'm not following, could you elaborate?
medo-bear · 4 years ago
common lisp supports type annotations. there is even an ML-style language implemented in it (see coalton). quicklisp [1] is used for package management, bordeaux-threads [2] for threading.

1. https://www.quicklisp.org/index.html

2. https://github.com/sionescu/bordeaux-threads

jtmoulia · 4 years ago
Common Lisp has a smaller community than the currently most popular languages, but I'm consistently impressed by the range and quality of the libraries the community has created (despite all the "beta" disclaimers) [1]

Regarding type checking, Common Lisp is expressive enough to support an ML dialect (see Coalton), and is easily extended across paradigms [2]

1. https://project-awesome.org/CodyReichert/awesome-cl

2. https://coalton-lang.github.io/

AlchemistCamp · 4 years ago
> It's also dynamically typed which disqualifies it for large collaborative projects.

Like Github or WordPress?

jcelerier · 4 years ago
...are those considered "good"? GitHub is meh at best, considering it's 13 years old and the billions of dollars poured into it, and as for WordPress, I don't think anyone can reasonably call it sane software. They are both good arguments against dynamic typing imho (especially the latter).
SZJX · 4 years ago
> It's also dynamically typed which disqualifies it for large collaborative projects.

That’s quite an absolute statement. At least Erlang/Elixir users would tend to disagree. “Dynamically typed” can still cover a huge variety of approaches, and doesn’t always have to look like writing vanilla JavaScript, for example.

kubb · 4 years ago
I hear your argument as "there exist dynamically typed languages therefore the benefits of typing don't matter". To be positive I'll say that it's better than the pendulum argument.

I'm aware that there exist dynamically typed languages in which large projects are written, I'm saying that they would be better off with type safety.

cracauer · 4 years ago
> It's also dynamically typed which disqualifies it for large collaborative projects.

You can add type declarations and a good compiler will check against them at compile time: https://medium.com/@MartinCracauer/static-type-checking-in-t...

randomswede · 4 years ago
It is non-optionally strongly typed.

It is optionally as statically typed as you want, depending on the compiler; I am mostly familiar with SBCL, which does a fair bit of type inference and will tell you at length where your lack of static typing means it will produce slower code.