I remember reading the Perl O'Reilly book (Introduction to Perl[0]) end-to-end and basically feeling that it all made sense - ($)calar, (@)rray and % for dictionary (because we have a pair of "o"s representing key/value) and so on. Coming from only writing bash (which is all people around me wrote), it was like becoming a superhero overnight. I rewrote all my bash scripts in Perl and got high-level language features with blazing speed. I relished taking people's bash scripts that took an hour and rewriting them in Perl so they took barely a few minutes (which was objectively terrible performance, but I was a novice programmer then and nobody else knew better). I was a hotshot. It was an awesome feeling.
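For anyone who never met the sigils, a minimal sketch of the mnemonic being described (variable names invented):

    my $count = 3;                          # $calar
    my @langs = ('bash', 'perl');           # @rray
    my %ports = (http => 80, ssh => 22);    # %hash - pairs of keys and values
    print "$count scripts, first was $langs[0], ssh is port $ports{ssh}\n";
    # single elements take the $ sigil, since you get back a scalar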
Later I got another role solely because they couldn't find Perl programmers and I wasn't half bad at it. But this was an actual application written by a bunch of people who wanted to write clever code, and it was like handing the keys to a missile depot to a bunch of arsonists. Many thousands of LoC. By the end of it, I was told that we needed to move to Java, and I could barely contain my relief.
For one-off scripts, still nothing flows like Perl. It is the most interesting language I have coded in, bar none.
[0] correction: Learning Perl, the llama book (thanks @ninkendo)
I was a mediocre developer and student in my CS program and actually considered getting out at a few points. I really loved systems and building solutions though, and I ended up becoming a DBA.
For some reason my mental model resonated with Perl. I was able to use it almost like a writing process, getting my “outline” laid out in Perl and then refactoring and supplementing it with more efficient C code or third-party stuff later.
It was cool; I started fixing data integration issues and automating processes around the databases. Eventually a colleague and I basically built an application that made our DR testing failover and failback processes a two-click event. I left that company long ago, and I know a bunch of our stuff ran for almost 20 years before the system was migrated to AWS.
IT is more industrial and efficient these days. That’s not a bad thing, but I had a lot of fun being the kid showing the old people what Linux was and gluing all of these systems together to orchestrate them. Unfortunately Perl is an artifact of that era.
I felt the same when I used Python to rewrite some broken/incompatible C code. It didn't save on performance, but it did reignite that hacker mindset with a clear-to-write language. I show students my Flask setups and I can see the light bulbs firing off in their brains.
The older languages may be artifacts of our era of code, but I'm excited to see what the next wave of documented prompting vibe coding will bring.
I might finally understand what the heck my students' code is doing HA!
At some point when the LLM fails, you'll need a real programmer to figure it out.
But IMO, LLMs are code generation, and code generation always fails at some point when the pile of generated code topples, no matter the language.
The amount of bad enterprise LLM code that will be cranked out in the next few years is going to be fascinating to watch.
There's so much I could write here; not only do I agree with you - there remains little better for processing streams of text than perl.
Even PHP's prominence (and ease of use) can be traced back to the parts it borrows from Perl, since the web is just manipulating text after all.
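As a minimal sketch of the kind of stream processing meant here (the log format is invented for the example):

    #!/usr/bin/perl
    # Read stdin line by line, pull a trailing byte count out of matching
    # lines, and keep a running total - the classic perl-as-filter pattern.
    use strict;
    use warnings;

    my $bytes = 0;
    while (my $line = <STDIN>) {
        next unless $line =~ /" 200 (\d+)\s*$/;   # made-up access-log shape
        $bytes += $1;
    }
    print "total bytes: $bytes\n";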
I went on a similar journey to you. Started with C, moved to bash, then to perl to replace bash scripts, and you're right that it feels like a super power.
But the most interesting language I've ever coded in was probably Ruby, because it changed the way I look at languages. In Ruby, everything is an object and everything is mutable - this makes the dynamic metascripting possibilities simply absurd.
I don't advocate for these languages anymore, as Perl easily becomes "write once, read never" in a way that's much worse than bash - but I can't help but feel like we have definitely lost a killer language for text processing. Seemingly nobody writes perl anymore.
> there remains little better for processing streams of text than perl.
That's true, but obscures what IMHO is a deeper truth: thinking in perl naturally leads to thinking in "streams of text", which is a kind of general composability that's been largely forgotten in the Unix world.
These days every project has its own giant list of dependencies, its own conventions about code structure, its own list of para-toolchain utilities like linters and formatters. Often it even has its own set of vscode extensions you pretty much have to use.
Nothing is just a tool anymore.
There was something so wonderfully down-to-earth and humane about those O'Reilly books. But actually most IT books had something casually playful and creative about them. A quality you rarely find these days.
The camel book was “programming Perl” and was more geared towards existing programmers IIRC… there was also the llama book, “learning Perl”, which was maybe what GP was referring to?
Learning Perl (the llama book) was my first programming book, and it taught me programming in general, not just Perl, and I still think it was an amazing book. Very approachable, helpful to beginners, I read it cover to cover. There’s also “learning Perl objects references and modules”, which is a bit of a sequel to Learning Perl. Those two books helped me land my first gig as a Perl programmer, and started my whole career.
But I couldn't understand anything of it.
Hence I became an economist within cyber security instead.
Ah, back in university I wrote a paper comparing Perl, Python and Tcl/Tk.
Tcl was by far the easiest to use, while Perl sat on the other end of the spectrum for both skill needed to write it as well as to just be able to read it.
As the mantra goes, "Power, but at what cost?!?"
Modern python is far more complicated than anything one saw in a perl 5 script back into the day. Type hints, decorators everywhere, big frameworks like numpy that force your problem into particular paradigms. The modern focus on async and iterables has had the effect of turning even simple code "inside out". It's complicated!
And what's interesting is that it's complicated in a lot of similar ways, in terms of thought space, as perl was. Perl was great because you could be clever, where you really couldn't in more pedestrian languages.
I find (and I'm not sure this is a good thing) that my python output these days feels very clever sometimes...
I took great pride in making readable, maintainable perl.
I worked at a VFX place that was held together by loads of perl, written over a good 15 years. Some of it was clever, but most of it was plain readable scripts.
The key to keeping it readable was decent code reviews and someone ripping the piss out of you for making unreadable soup.
It's all python nowadays. I do miss CPAN, but I don't miss perl's half-arsed function args.
However, for the longest time the documentation for perl was >> than python's. At the time, python docs were written almost exclusively for people who already knew how to python. Perl docs assumed you were in a hurry and needed an answer now, and if you were still reading by the end, assumed you either cared or were lost and needed more info.
With the rise of data science, python has lost its "oh you should be able to just guess, look how _logical_ the syntax is" attitude in favour of "do it like this."
> perlcritic is a Perl source code analyzer. It is the executable front-end to the Perl::Critic engine, which attempts to identify awkward, hard to read, error-prone, or unconventional constructs in your code. Most of the rules are based on Damian Conway's book Perl Best Practices. However, perlcritic is not limited to enforcing PBP, and it will even support rules that contradict Conway. All rules can easily be configured or disabled to your liking.
https://metacpan.org/dist/Perl-Critic/view/bin/perlcritic
It helped me a lot. I think every Perl developer should use it; it might help to avoid headaches later on. Be careful with severity levels "brutal", "cruel" and "harsh", however. I think "gentle" works in many cases. That said, I used "brutal" and only fixed the legitimate issues. "brutal" helped me write a proper POD, for one, as "gentle" does not complain about that.
> I took great pride in making readable, maintainable perl.
In the past when I used perl, I did the same thing.
But I came to learn one thing about perl - its good point is its bad point.
When I used it, perl was the highest level language I ever used. It was expressive, meaning I could take an idea in my head, and implement it in perl with the least friction of any language.
When I worked with other people's perl, I found they were mindful and cared about what they were doing.
But the way they thought was sometimes almost alien to me, so the expression of their thinking was a completely different type of perl, and it was lots less readable to me. And frequently the philosophy of what they wrote was backwards or inside out from what I would do.
Now I have replaced perl with python day to day and although implementation of code seems a few steps removed from my thinking, it seems that other people's code is more easily read and understood. (this is just my opinion)
As they say, with perl there's more than one way to do it (TMTOWTDI) and with python there is only one way.
Both approaches have their merits. Like with maven, where I once saw a question on a forum that was like "how do I do X?" and the reply was basically "You can't, don't try to do so, as that's wrong".
It just grinds my gears that _I_ need to check to see if the caller has given me all the required bits. That seems like something the language should do.
I understand that it does give you a lot of flexibility, but Hnnnnnnnnn
(from what I recall object oriented perl doesn't give you this flexibility, but I'm not sure, as I never really did it. )
You mean you don't like writing things like...
1. shift only shifts off the first element.
2. (if classify this as a bug) using $a and $b are frowned upon because they're the default variables when using sort.
All that old code still works, though.
Perl’s function arguments no longer require you to shift or to access the @_ array. There are even proper argument signatures now. There’s also continuing improvements in putting a one true way to do objects into the core language, so you don’t have to bless a hash, use Moose, or Moo, or use Object::InsideOut (or any of a dozen other non-core modules).
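For anyone who left before this landed, a minimal sketch of the old unpacking style next to a modern signature (sub names invented; signatures stopped being experimental in perl 5.36):

    use v5.36;   # enables strict, warnings, say, and signatures

    sub old_style {
        my ($host, $port) = @_;          # unpack @_ by hand
        $port //= 22;
        return "$host:$port";
    }

    sub new_style ($host, $port = 22) {  # real signature with a default value
        return "$host:$port";
    }

    say old_style('example.org', 8022);  # example.org:8022
    say new_style('example.org');        # example.org:22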
One thing I wish other languages had was Perl's taint mode: once enabled, input coming from the outside was "tainted", along with anything you explicitly marked as tainted. If a tainted variable was used to populate another variable (such as by concatenation), the result itself was tainted. If a tainted variable was used in certain ways (such as with the `open` call), the program crashed. The primary way to remove a taint was by running the variable through a regular expression and using the captured matches (which would not be tainted).
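A minimal sketch of that flow (run with perl -T; the regex and file-name scheme are invented for the example):

    #!/usr/bin/perl -T
    use strict;
    use warnings;

    my $input = <STDIN>;            # data from outside the program is tainted
    chomp $input;

    # Using the tainted value somewhere perl considers unsafe (a write-mode
    # open, system(), unlink, ...) dies with "Insecure dependency in ...":
    #   open my $out, '>', $input or die;    # would abort under -T

    # Untainting: run it through a regex and keep only the capture.
    my ($clean) = $input =~ /\A([\w.-]+)\z/
        or die "refusing suspicious input\n";

    open my $out, '>', "$clean.log" or die "open: $!";
    print {$out} "ok\n";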
This is "parse, don't validate" as a language feature. Any statically typed language has this, in the sense that you can write your domain logic in terms of a set of "untainted" domain types, and only provide safe conversion functions (parsers) from user input to domain types.
No, they really don’t have this, because for example you can still open() using an arbitrary string as a file name, a string which may have come from unvalidated input. They don’t force you to convert the string to a FileName type and also prove that you have done some sort of pattern-matching on the string.
Ruby does. Normalization of untrusted input isn't taught or discussed enough. Or each platform's regex security.
Honestly, I think all CS/EE programs should require an OWASP course and that coding should require regular continuing education that includes defensive coding practices for correctness, defined behavior, and security.
Suppose the Table family type their son Bobby's name into a form. The Perl program now has a "tainted" string in memory - "Robert'; DROP TABLE Students --".
The Perl code passes this string through a regex that checks the name is valid. Names can include apostrophes (Miles O'Brien) and hyphens (Jean-Luc Picard) along with spaces and normal ASCII letters, so the regex passes and the string is now untainted.
> The Perl code passes this string through a regex that checks the name is valid
I think "parse don't validate" doesn't help in this example, but naively the regex would not check whether a name is valid but "extract all parts of the string that are provenly safe".
Which is not reasonable for SQL statements, so someone invented prepared statements.
I think the idea is that the Regex parsing forces the programmer to think about what they're doing with the string and what the requirements for the non-tainted variable are.
For example, a file name string would not allow unescaped directory separators, dots, line breaks, null bytes (I probably got most details wrong here...) and the regex could remove these or extract the substring until the first forbidden character.
Sure, this cannot prevent mistakes.
But the idea, I think, is not to have a "safeUserName" variable, but rather a "safeDbStatement" one.
You should be using DBI or something that builds on DBI to use prepared statements for database interactions. That’s why it’s called the DataBase Interface.
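For instance (a sketch; DBD::SQLite and the table layout are assumptions here, not anything from the thread):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=school.db', '', '',
                           { RaiseError => 1 });

    my $name = "Robert'; DROP TABLE Students --";   # hostile input stays data
    my $sth  = $dbh->prepare('INSERT INTO students (name) VALUES (?)');
    $sth->execute($name);   # the driver binds the value; nothing is interpolated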
Nice idea, thank you! I think it should be possible to make a Python object behave in a similar way (crashing when converted to string / ...), need to see if I can make it work.
ps: ah well, that was fast https://en.wikipedia.org/wiki/Taint_checking#History :) (1989)
In PHP, you can construct objects directly from $_GET/POST (and erase everything from these vars to make sure they are not used directly), then lean on data types to make sure that these values are not used in a wrong place.
Perl
- Was an easy jump from bash to Perl
- Perl never felt like it "got in the way"
- was WAY too easy to write "write only code"
- that being said, I learned Java first and most people found MY Perl code to be very legible
- regexes as first class citizen were amazing
- backwards compatible is GREAT for older systems still running Perl (looking at you banks and some hedge funds)
Python
- Forced indentation made it MUCH easier to read other people's code
- everything is an object from day one was much better than "bless" in Perl
- no standard way of setting up an environment for MANY years hurt
- sklearn and being taught in universities were real game changers
> no standard way of setting up an environment for MANY years hurt
Serious question: is that solved? I still see a forest of options, some of which depend on each other, and at last count my laptop has 38 python binaries. What's the standard way?
https://docs.astral.sh/uv/
It depends on what "setting up" means.
Creating an environment, given that the Python binary it will use is already installed, is trivial (standard library functionality since late 2012). So is choosing which environment to use. So is installing pre-built packages, and even legacy source packages are pretty easy (but slow, and installation runs arbitrary code which is entirely needless for these) when they only contain pure Python code. Even dependency resolution is usually not too bad.
The big problems are things like
* building multi-language packages from source locally, because this is expected to set up temporary local build environments (and build tools have to avoid recursion there)
* using external non-Python dependencies (essentially unsolved, and everyone works around this by either vendoring stuff or by not declaring the dependency and failing at runtime) — see https://pypackaging-native.github.io/ for an overview of the problems and https://peps.python.org/pep-0725/ for what they're trying to standardize to deal with it
* dealing with metadata for source packages; in the really general case you have to build the source to get this (although the package-building API now provides a hook so that build backends can specifically prepare metadata). This is mainly because some packages have dependencies that depend on very particular platform details that (apparently) can't be expressed with the "environment marker" scheme in standard metadata (https://peps.python.org/pep-0508/#environment-markers)
* and, of course, figuring out which packages need to be in your environment (Python won't decide for you what your direct dependencies are) and managing that environment over time. The reason all these other tools popped up is because Pip only installs the packages and offers very basic environment inspection; it's only now starting to do anything with lockfiles, for example, now that there is finally a standard for them (https://peps.python.org/pep-0751/).
But if you mean, is there a standard toolchain that does everything and will be officially blessed by the core language developers, then no, you should not ever expect this. There is no agreement on what "everything" entails, and Python users (a large fraction of which don't fit the traditional image of a "developer" at all) have widely varying workflows and philosophical/aesthetic preferences about that. Besides which, the core language team doesn't generally work on or care about the problem; they care about the interpreter first and foremost. Packaging is an arms-length consideration. Good news, though: the Python Packaging Authority (not at all authoritative, and named with tongue firmly in cheek, but a lot of people didn't get that) is stepping up and working on official governance (see https://peps.python.org/pep-0772/).
> at last count my laptop has 38 python binaries
Something has gone very wrong (unless you're on Windows, such that admin rights would be needed to create symlinks and by default `venv` doesn't try). To be clear, I mean with your setup, not with the tooling. You should only need one per distinct version of Python that your various environments use. I'd be happy to try to help if you'd like to shoot me an email (I use that Proton service, with the same username as here) and give more details on how things are currently set up and what you're trying to accomplish that way.
I will say coming from years of perl that python had a refreshing amount of "batteries included" via the standard library.
It was only rarely that my code needed "outside help", usually something like requests or numpy.
I suspect this is because I used python in the same environment as perl, automating unixy kinds of things.
I suspect "setting up an environment" is because python has been so successful, becoming an enormously broad general language.
Not, if you know what you are doing.
Perl was my first scripting language. I occasionally need to run some of those old scripts (15-20 years old), and they always run. Python scripts last 6-12 months.
Articles that pooh-pooh one language kind of have a dated, all-or-nothing perspective.
Most languages have a decent enough framework or two, so the differences between using them for a given use case may be closer than many folks realize, versus whatever we hear about as the new hotness through the grapevine.
A mess can be made in a lot of languages, and a long time ago it was even easier - except some of that code worked and didn't get touched for a long time.
Leaving aside issues of language design and the emergence of other languages, it's interesting to think about other reasons why Perl lost popularity. Some of you know this history better than I do, but I think that it's now unknown to most HN readers.
The enormous reason that I see is the insistence, from Larry Wall and others, on a bottom-up "community" transition from Perl 5 to Perl 6. The design process for Perl 6 was announced at a Perl conference in 2000 [1]; 15 years later, almost every Perl user was still using Perl 5. The inability of the Perl community to push forward collectively in a timely way should be taken by every other language community as a cautionary tale.
Tim O'Reilly made a secondary point that may also be important. For a long time, Perl books were O'Reilly's biggest sellers. But the authors of those titles didn't act on his suggestion that they write a "Perl for the Web" book (really a Perl-for-CGI book). Books like that eventually came, but the refusal of leading authors to write such a book may have made it easier for PHP to get a foothold.
[1] https://en.wikipedia.org/wiki/Raku_(programming_language)#Hi...
Perl was the perfect language when the majority of people wanting to use it already knew shell, sed, awk, and C.
It succeeded because it was a beautiful and horrible combination of those tools/languages. If you know those things, Perl is really easy to bang together to generate a webpage or to automate an administrative task.
Given Larry Wall's education, perhaps we shouldn't be surprised that Perl underwent a linguistic evolution from pidgin[1] to creole[2].
I came to Perl a bit late, not being a super adept user of Unix and only having written C in school. I'd put myself in the category of Perl developer that was part of the second "creole language" phase of Perl's development. I learned a ton of Unix and got better at C by learning Perl.
While Perl's mixed nature made it successful, when the world of web development expanded to include more people without the necessary background to benefit from the admixture, it went from asset to hindrance. All that syntax went from instantly familiar to bizarre. A perfect example of this is the file test operators[3].
The Perl 6 struggles definitely added to the difficulties posed by the changing nature of the web dev community. They created enough uncertainty that tons of people asked themselves "Why should I learn Perl 5 when Perl 6 is just around the corner?". That slowed adoption in the 2001-2010 timeframe while Python and Ruby grew rapidly.
Rakulang is kind of magical. Writing it is like using a language given to us by aliens. I wish it was getting more uptake because it is fun and truly mind expanding to write.
In the end, I think the shifting nature of the community was a larger factor in the decline of Perl than the slow Perl 6 rollout and failure of messaging around it.
I still love writing Perl 5 and I wish I got to do more of it.
[1] https://en.wikipedia.org/wiki/Pidgin [2] https://en.wikipedia.org/wiki/Creole_language [3] https://perldoc.perl.org/functions/-X
Perl 6 definitely sucked up any forward momentum in the community.
It's become a poster child for how not to do a major transition.
KDE3/4, GNOME 2/3, Python 2/3 transitions all benefited from this hindsight (still experiencing a lot of pain themselves).
Raku might be an interesting language (I haven't dug deep), but it's not Perl. Larry et al. should've just given it a separate name from the start and allowed others to carry on the Perl torch. They did this too late, when the momentum was already dead.
Perl 5 was a product of its time, but so was Linux, C, Python 2 or PHP3, and they're still very much relevant.
I had lots of experience writing Perl5 before the company switched to Python3.
> The inability of the Perl community to push forward collectively in a timely way should be taken by every other language community as a cautionary tale.
I think this is a good point that I hadn't considered before.
I think Perl stopped being able to attract new users. There is always going to be users leaving. If they aren't replaced, you will slowly shrink.
I think the point you raised is part of why they couldn't attract new users. I also think people asked themselves "why choose perl now, if I know I'll need to rewrite when Perl 6 comes?" and decided Perl 5 was a bad choice. I also think the fact Perl had this reputation for being ugly, difficult, and "write-only line noise" kept people from even considering it, even if that reputation didn't match production codebases.
I just had to check if it even existed because I was sure that I had a CGI book that focused on Perl from O'Reilly in the late 90's, and sure enough, the book I had was published in 1996 (with a second edition released in 2000).
Not saying your anecdote is inaccurate, but my perception around that time was that "Learn PHP in 24 Hours" was a lot hotter than O'Reilly's Perl books - so it may have just been luck, marketing, a flashier title, or even just that PHP was better suited for what people wanted to learn and do.
Slapping PHP tags inside an HTML file was a more natural "outside-in" sort of introduction to programming for the web than Perl's approach. If you already knew some HTML and wanted to add a widget to a page, PHP made that dead simple.
Yes, that's what I had in mind. 1996 was still early—but perhaps the delay allowed PHP to get more of a foothold than it would otherwise have had, especially at such an early stage in the web's development.
perl5 -> php3 here. I'd say PHP won for the following reasons:
1. Easier to set up.
2. More homogeneous environment, which meant easier deployment across the hosts of the day.
3. More secure (a bit).
4. Unicode actually worked.
5. Far more logical syntax: you did not have to man perldsc[0] every time you wanted a data structure of greater depth than a one-dimensional hash or array.
6. Language features like scalars which made programming easier.
7. Often fewer files... it was designed to be embedded inside of HTML instead of generating it.
[0] https://perldoc.perl.org/perldsc
I made the switch from Perl to php mostly because php executed a lot faster (without having to figure out any weird stuff like modperl) and I loved the little convenience functions just built-in like “strtoupper” and the date formatting stuff!
I think Perl died from a combination of three factors:
1. The rise of other web languages that did what Perl was good at but better.
Perl was probably a fine language for sysadmins doing some text munging. Then it became one of the first languages of the web thanks to Apache and mod_perl, but it was arguably never great at that. Once PHP, Python, Ruby, and (eventually) JavaScript showed up, Perl's significant deficiencies for large maintainable codebases made it very hard to compete.
In many cases, a new, better language can't outcompete an old entrenched one. The old language has an ecosystem and users really don't like rewriting programs, so that gives it a significant competitive advantage.
But during the early rise of the web, there was so much new code being written that that advantage evaporated. During the dot-com boom, there were thousands of startups and millions of lines of brand new code being written. In that rare greenfield environment, newer languages had a more even playing field.
2. Perl 6 leaving its users behind.
As a language maintainer, I feel in my bones how intense the desire is to break with the past and Do Things Right This Time. And I'm maintaining a language (Dart) that is relatively new and wart-free compared to Perl. So I can't entirely blame Wall for treating Perl 6 as a blank check to try out every new idea under the sun.
But the problem is that the more changes you make to the language, the farther you pull away from your users and their programs. If they can't stay with you, you both die. They lose an active maintainer for the core tools they rely on. And you lose all of their labor building and maintaining the ecosystem of packages everyone relies on.
A racecar might go a lot faster if it jettisons all the weight of its fuel tank, but it's not going to go fast for long.
I think the Perl 6 / Raku folks got so excited to make a new language that they forgot to bring their users with them. They ran ahead and left them behind.
3. A wildly dynamic language.
If you want to evolve a language and keep the ecosystem with you while you do it, then all of that code needs to be constantly migrated to the new language's syntax and semantics. Hand-migrating is nightmarishly costly. Look at Python 3.
It's much more tractable if you can do most of that migration automatically using tools. In order to do that, the tools need to be able to reason in detail about the semantics of a program just using static analysis. You can't rely on dynamic analysis (i.e. running the code and seeing what it does) because it's just not thorough enough to be safe to rely on for large-scale changes to source code.
Obviously, static types help a lot there. Perl 5 not only doesn't have those, but you can't even parse a Perl program without running Perl code. It is a fiendishly hard language to statically analyze and understand the semantics of.
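A well-known illustration of that last point, lightly adapted here as a sketch (the sub name is arbitrary):

    sub whatever () { 100 }    # empty prototype: calls take no arguments

    my $x = whatever / 25; # / ; die "only runs without the () prototype";
    print "$x\n";              # 4 -- the line parsed as whatever() / 25

    # Declare "sub whatever { 100 }" instead, and the very same line parses
    # as whatever(m/ 25; # /) followed by a live die() statement. A static
    # tool cannot know which reading applies without effectively running
    # the code.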
So even if the Perl 6 folks wanted to bring the Perl 5 ecosystem with them, doing so would have been extremely challenging.
I would say this is a case study in a programming language tragedy, but I honestly don't even know if it's a bad thing. It may be that programming languages should have a life cycle that ends in them eventually being replaced entirely by different languages. Perhaps Perl's time had simply come.
I am grateful for all of the innovative work folks have done on Perl 6 and Raku. It's a cornucopia of interesting programming language ideas that other languages will be nibbling on for decades.
Perl did not only have mod_perl. It also had the same kind of frameworks that made Ruby and Python great for web development. It was called Catalyst and was production ready around the same time as RoR and Django.
The real reason why perl failed on this front, IMHO, is that the language makes it super unergonomical to define any nested data structures. In Javascript, Ruby and Python, a list of dictionaries is just some JSON-like syntax: { "x": [...], "y": [...] }
In perl you have to deal with scalars, references, references to scalar, value references, ... and you have the sigils that mean different things depending on what the variable contains. I mean, I spent significant time writing perl and never figured this out.
In a world where you just want a CRUD to load/save a piece of structured data, the ones that let you operate on the data and keep your sanity wins.
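A minimal sketch of the difference being described, using the same keys as above (the arrow/dereference dance is the part people trip over):

    # The JSON-ish literal { "x": [...], "y": [...] } becomes a hashref
    # whose values are array references:
    my $doc = {
        x => [ 1, 2, 3 ],
        y => [ 4, 5, 6 ],
    };

    print $doc->{x}[2], "\n";             # 3 -- arrow through the hashref
    print scalar @{ $doc->{y} }, "\n";    # 3 -- dereference to get the array
    push @{ $doc->{x} }, 4;               # and again to push onto the inner array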
> But the problem is that the more changes you make to the language, the farther you pull away from your users and their programs. If they can't stay with you, you both die.
Should people just not make new languages any more?
Should new languages not bear deep resemblances to existing ones?
> Hand-migrating is nightmarishly costly. Look at Python 3.
> It's much more tractable if you can do most of that migration automatically using tools.
There was and is in fact abundant tool support for this in Python. The `2to3` script shipped with Python, and the corresponding `lib2to3` was part of the standard library through 3.11. The third-party `six` compatibility library is still downloaded from PyPI more often than NumPy.
> In order to do that, the tools need to be able to reason in detail about the semantics of a program just using static analysis.
The experience of these tools' users disagrees, from what I've seen. They produce ugly code in places, but it generally passes the tests. Yes, manual fixes are often necessary, but there were source packages published in that era (and it's not like they disappeared from PyPI either) that would detect the Python version in `setup.py`, run the tool if necessary at installation time, and have happy users.
Python's soul is "Pythonic", which stayed pure from Python 2 to Python 3, but Perl's soul was "Pearl-ick", and Perl 6 had a totally different kind of "ick" than Perl 5 did.
Perl's language design fetishizes solving self-inflicted puzzle after puzzle after puzzle.
Summary:
An incident on python-dev today made me appreciate (again) that there's more to language design than puzzle-solving. A ramble on the nature of Pythonicity, culminating in a comparison of language design to user interface design.
Some people seem to think that language design is just like solving a puzzle. Given a set of requirements they systematically search the solution space for a match, and when they find one, they claim to have the perfect language feature, as if they've solved a Sudoku puzzle. For example, today someone claimed to have solved the problem of the multi-statement lambda.
But such solutions often lack "Pythonicity" -- that elusive trait of a good Python feature. It's impossible to express Pythonicity as a hard constraint. Even the Zen of Python doesn't translate into a simple test of Pythonicity.
In the example above, it's easy to find the Achilles heel of the proposed solution: the double colon, while indeed syntactically unambiguous (one of the "puzzle constraints"), is completely arbitrary and doesn't resemble anything else in Python. A double colon occurs in one other place, but there it's part of the slice syntax, where a[::] is simply a degenerate case of the extended slice notation a[start:stop:step] with start, stop and step all omitted. But that's not analogous at all to the proposal's lambda <args>::<suite>. There's also no analogy to the use of :: in other languages -- in C++ (and Perl) it's a scoping operator.
And still that's not why I rejected this proposal. If the double colon is unpythonic, perhaps a solution could be found that uses a single colon and is still backwards compatible (the other big constraint looming big for Pythonic Puzzle solvers). I actually have one in mind: if there's text after the colon, it's a backwards-compatible expression lambda; if there's a newline, it's a multi-line lambda; the rest of the proposal can remain unchanged. Presto, QED, voila, etcetera.
But I'm rejecting that too, because in the end (and this is where I admit to unintentionally misleading the submitter) I find any solution unacceptable that embeds an indentation-based block in the middle of an expression. Since I find alternative syntax for statement grouping (e.g. braces or begin/end keywords) equally unacceptable, this pretty much makes a multi-line lambda an unsolvable puzzle.
And I like it that way! In a sense, the reason I went to considerable length describing the problems of embedding an indented block in an expression (thereby accidentally laying the bait) was that I wanted to convey the sense that the problem was unsolvable. I should have known my geek audience better and expected someone to solve it. :-)
The unspoken, right brain constraint here is that the complexity introduced by a solution to a design problem must be somehow proportional to the problem's importance. In my mind, the inability of lambda to contain a print statement or a while-loop etc. is only a minor flaw; after all instead of a lambda you can just use a named function nested in the current scope.
But the complexity of any proposed solution for this puzzle is immense, to me: it requires the parser (or more precisely, the lexer) to be able to switch back and forth between indent-sensitive and indent-insensitive modes, keeping a stack of previous modes and indentation level. Technically that can all be solved (there's already a stack of indentation levels that could be generalized). But none of that takes away my gut feeling that it is all an elaborate Rube Goldberg contraption.
Mathematicians don't mind these -- a proof is a proof is a proof, no matter whether it contains 2 or 2000 steps, or requires an infinite-dimensional space to prove something about integers. Sometimes, the software equivalent is acceptable as well, based on the theory that the end justifies the means. Some of Google's amazing accomplishments have this nature inside, even though we do our very best to make it appear simple.
And there's the rub: there's no way to make a Rube Goldberg language feature appear simple. Features of a programming language, whether syntactic or semantic, are all part of the language's user interface. And a user interface can handle only so much complexity or it becomes unusable. This is also the reason why Python will never have continuations, and even why I'm uninterested in optimizing tail recursion. But that's for another installment.
Technically I did my first programming in the 80s as a kid. I went to college in the 1990s. I definitely learned Perl and used it.
However, I would say an awful lot of the professionals I was around already thought Perl had a bad smell even in the 1990s. It was definitely looked down upon in academia by then - maybe not in an IT department or a Math department, but in the CS department it was. It was used by IT guys, and QA guys, and somebody gluing some tools together. An awful lot of people thought it was unacceptable for it to be in serious production code, or anything that had to be long-term maintainable or be worked on by a team of people larger than size = 1. Your perception of it definitely came from where you were at the time and how you encountered it. If you were on a team producing software for sale that involved a bunch of people, and you had version control and QA and everything, Perl was already not your thing.
I worked in Perl for ~2.5 years in the mid-2000s. It wasn't the language for me, but I liked, respected, and am still friends with colleagues who loved it. However, I was always dumbfounded by the experience that none of them could or even professed to be sure of what most code fragments did at a glance, even fragments that only used constructs in the base language. When they worked on existing code, they'd run it, tweak it, run it again, maybe isolate some into a small file to see what it did, look at perldoc, ask questions on IRC, etc. As a Lisp guy, I'm all for interactive & iterative development, but I also like learning tools so as to use them more effectively over time. I didn't find that learning Perl made me more productive with it over time (on a modestly sized, pre-existing code base that needed both maintenance and new feature development), and the Perl lovers I knew didn't seem to mind not having this as part of their work.
Anyhow, toward the end of my time there, I had to interview candidates. Because I came to believe that the above is how one had to work with Perl, I took to asking the ones who said they knew Perl, "Can the reverse builtin reverse a list?" (I won't spoil the answer for you.) Most would answer with "Yes" or "No"; 75% of them were mistaken. Either way I'd ask them "Suppose you weren't confident about that answer. How would you determine the truth?" IIRC, 90% of them said "I'd write this one-liner..." and (I swear) 100% of the one liners would give any reasonable person an impression of an answer that turns out to be incorrect. The ones that said "I'd check the perldoc" were the only ones I'd approve for subsequent interviews.
I hate when you have code that you can't simply read to understand what it does. I'd like to say that probably 99% of the code that I write embodies that, I refuse to use complicated language constructs wherever I can, even if that makes the code longer.
When I was a kid and just started working I would still often code with whatever I came up with initially, but then you go back to that 3 months later and you have to throw the whole thing out because it's impossible to maintain or add to.
On the other hand there are sometimes additions to a language that's just so useful that you have to expand your vocabulary, for example in C# that's happened a few times. One of the notable additions there was LINQ that made manipulating data so much easier. It can become dangerous though, the same as for example with a complicated DB stored procedure.
I write some new Perl/Gtk application a couple times a year. And I use it for automating basic things almost daily. I bet lots of people chose to write in perl for personal projects. It just isn't very visible.
But not that many. And that's why Perl is still Perl. Popularity brings change which means old code stops working.
Perl code from the year 2000 still works in a perl interpreter+libs today. And perl code written today still works on perl interpreter+libs from the year 2000. That incredible stability and reliability is what makes it great. Write something then use it 20 years later and everything just works anywhere you try to run it.
Later I got another role solely because they couldn't find Perl programmers and I wasn't half bad at it. But this was an actual application written by a bunch of people who wanted to write clever code and it was like handing over the keys to a missile depot to a bunch of arsonists. Many thousands LoC. By the end of it, I was told that we need to move to using Java and I could barely contain my relief.
For one-off scripts, still nothing flows like Perl. It is the most interesting language I have coded in, bar none.
[0] correction: Learning Perl, the llama book (thanks @ninkendo)
I was a mediocre developer and student in my CS program and actually considered getting out at a few points. I really loved systems and building solutions though, and I ended up becoming a DBA.
For some reason my mental model resonated with Perl. I was able to use it almost like a writing process, getting my “outline” laid out in Perl and refactoring supplementing with more efficient C code or third party stuff later.
It was cool, i started fixing data integration issues and automating processes around the databases. Eventually a colleague and I basically built an application that made our DR testing failover and failback processes a two-click event. I left that company long ago and I know a bunch of our stuff ran almost 20 years before the system was migrated to AWS.
IT is more industrial and efficient these days. That’s not a bad thing, but I had alot of fun being the kid showing the old people what Linux was and gluing all of these systems to orchestrate them. Unfortunately Perl is an artifact of that era.
The older languages may be artifacts of our era of code, but I'm excited to see what the next wave of documented prompting vibe coding will bring.
I might finally understand what the heck my students' code is doing HA!
Even PHPs prominence (and ease of use) can be traced back to the parts it borrows from Perl; since the web is just manipulating text after all.
I went on a similar journey to you. Started with C, moved to bash, then to perl to replace bash scripts, and you're right that it feels like a super power.
But the most interesting language I've ever coded in was probably Ruby, because it changed the way I look at languages. In Ruby, everything is an object and everything is mutable - this makes the dynamic metascripting possibilities simply absurd.
I don't advocate for these languages anymore, as Perl is easy to become "write once, read never", in a way that's much worse than bash- but I can't help but feel like we have definitely lost a killer language for text processing. Seemingly nobody writes perl anymore.
That's true, but obscures what IMHO is a deeper truth: thinking in perl naturally leads to thinking in "streams of text", which is a kind of general composability that's been largely forgotten in the Unix world.
These days every project has it's own giant list of dependencies, it's own conventions about code structure, it's own list of para-toolchain utilities like linters and formatters. Often it even has it's own set of vscode extensions you pretty much have to use.
Nothing is just a tool anymore.
There was something so wonderfully down-to-earth and humane about those O'Reilly books. But actually most IT books had something casually playful and creative about them. A quality you rarely find these days.
Learning Perl (the llama book) was my first programming book, and it taught me programming in general, not just Perl, and I still think it was an amazing book. Very approachable, helpful to beginners, I read it cover to cover. There’s also “learning Perl objects references and modules”, which is a bit of a sequel to Learning Perl. Those two books helped me land my first gig as a Perl programmer, and started my whole career.
But I couldn't understand anything of it.
Hence I became an economist within cyber security instead.
Tcl was by far the easiest to use, while Perl sat on the other end of the spectrum for both skill needed to write it as well as to just be able to read it.
As the mantra goes, "Power, but at what cost?!?"
And what's interesting is that it's complicated in a lot of similar ways, in terms of thought space, as perl was. Perl was great because you could be clever, where you really couldn't in more pedestrian languages.
I find (and I'm not sure this is a good thing) that my python output these days feels very clever sometimes...
I worked at a VFX place that was held together by loads of perl, written over a good 15 years. Some of it was clever, but most of it was plain readable scripts.
The key to keeping it readable was decent code reviews and someone ripping the piss out of you for making unreadable soup.
Its all python nowadays, I do miss CPAN, but I don't miss perls halfarsed function args.
However for the longest time, the documentation for perl was >> than python. At the time python doc were written almost exclusively for people who knew how to python. Perl docs assumed you were in a hurry and needed an answer now, and if you were still reading by the end assumed you either cared or were lost and needed more info.
With the rise of datascience, python has lost its "oh you should be able to just guess, look how _logical_ the syntax is" to "do it like this."
> perlcritic is a Perl source code analyzer. It is the executable front-end to the Perl::Critic engine, which attempts to identify awkward, hard to read, error-prone, or unconventional constructs in your code. Most of the rules are based on Damian Conway's book Perl Best Practices. However, perlcritic is not limited to enforcing PBP, and it will even support rules that contradict Conway. All rules can easily be configured or disabled to your liking.
https://metacpan.org/dist/Perl-Critic/view/bin/perlcritic
It helped me a lot. I think every Perl developers should use it, might help to avoid headache later on. Be careful with severity level "brutal" and "cruel" and "harsh", however. I think "gentle" works in many cases. That said, I used "brutal" and I only fixed the legitimate issues. "brutal" helped me write a proper POD, for one, as "gentle" does not complain about that.
In the past when I used perl, I did the same thing.
But I came to learn one thing about perl - its good point is its bad point.
When I used it, perl was the highest level language I ever used. It was expressive, meaning I could take an idea in my head, and implement it in perl with the least friction of any language.
When I worked with other people's perl, I found they were mindful and cared about what they were doing.
But the way they thought was sometimes almost alien to me, so the expression of their thinking was a completely different type of perl, and it was lots less readable to me. And frequently the philosophy of what they wrote was backwards or inside out from what I would do.
Now I have replaced perl with python day to day and although implementation of code seems a few steps removed from my thinking, it seems that other people's code is more easily read and understood. (this is just my opinion)
Both approaches have their merits. Like with maven, where I once saw a question on a forum that was like "how do I do X?" and the reply was basically "You can't, don't try to do so, as that's wrong".
At some point when the LLM fails, you'll need a real programmer to figure it out.
But IMO, LLMs are code generation, and code generation always fails at some point when the pile of generated code topples, no matter the language.
The amount of bad enterprise LLM code that will be cranked out in the next few years is going to be fascinating to watch.
You mean you don't like writing things like...
It just grinds my gears that _I_ need to check to see if the caller has given me all the required bits. That seems like something the language should do.
I understand that it does give you a lot of flexibility, but Hnnnnnnnnn
(from what I recall object oriented perl doesn't give you this flexibility, but I'm not sure, as I never really did it. )
1. shift only shifts off the first element.
2. (if classify this as a bug) using $a and $b are frowned upon because they're the default variables when using sort.
All that old code still works, though.
Honestly, I think all CS/EE programs should require an OWASP course and that coding should require regular continuing education that includes defensive coding practices for correctness, defined behavior, and security.
Suppose the Table family type their son Bobby's name into a form. The Perl program now has a "tainted" string in memory - "Robert'; DROP TABLE Students --".
The Perl code passes this string through a regex that checks the name is valid. Names can include apostrophes (Miles O'Brien) and hyphens (Jean-Luc Picard) along with spaces and normal ASCII letters, so the regex passes and the string is now untainted.
I think "parse don't validate" doesn't help in this example, but naively the regex would not check whether a name is valid but "extract all parts of the string that are provenly safe".
Which is not reasonable for SQL statements, so someone invented prepared statements.
I think the idea is that the Regex parsing forces the programmer to think about what they're doing with the string and what the requirements for the non-tainted variable are.
For example, a file name string would not allow unescaped directory separators, dots, line breaks, null bytes (I probably got most details wrong here...) and the regex could remove these or extract the substring until the first forbidden character.
Sure, this cannot prevent mistakes.
But the idea, I think, is not to have a variable "safeUserName", instead a "safeDbStatement" one.
In PHP, you can construct objects directly from $_GET/POST (and erase everything from these vars to make sure they are not used directly), then lean on data types to make sure that these values are not used in a wrong place.
ps: ah well, that was fast https://en.wikipedia.org/wiki/Taint_checking#History :) (1989)
Perl
- Was an easy jump from bash to Perl
- Perl never felt like it "got in the way"
- was WAY too easy to write "write only code"
- that being said, I learned Java first and most people found MY Perl code to be very legible
- regexes as first class citizen were amazing
- backwards compatible is GREAT for older systems still running Perl (looking at you banks and some hedge funds)
Python
- Forced indentation made it MUCH easier to read other people's code
- everything is an object from day one was much better than "bless" in Perl
- no standard way of setting up an environment for MANY years hurt
- sklearn and being taught in universities were real game changers
Serious question: is that solved? I still see a forest of options, some of which depend on each other, and at last count my laptop has 38 python binaries. What's the standard way?
https://docs.astral.sh/uv/
It depends on what "setting up" means.
Creating an environment, given that the Python binary it will use is already installed, is trivial (standard library functionality since late 2012). So is choosing which environment to use. So is installing pre-built packages, and even legacy source packages are pretty easy (but slow, and installation runs arbitrary code which is entirely needless for these) when they only contain pure Python code. Even dependency resolution is usually not too bad.
The big problems are things like
* building multi-language packages from source locally, because this is expected to set up temporary local build environments (and build tools have to avoid recursion there)
* using external non-Python dependencies (essentially unsolved, and everyone works around this by either vendoring stuff or by not declaring the dependency and failing at runtime) — see https://pypackaging-native.github.io/ for an overview of the problems and https://peps.python.org/pep-0725/ for what they're trying to standardize to deal with it
* dealing with metadata for source packages; in the really general case you have to build the source to get this (although the package-building API now provides a hook so that build backends can specifically prepare metadata). This is mainly because some packages have dependencies that depend on very particular platform details that (apparently) can't be expressed with the "environment marker" scheme in standard metadata (https://peps.python.org/pep-0508/#environment-markers)
* and, of course, figuring out which packages need to be in your environment (Python won't decide for you what your direct dependencies are) and managing that environment over time. The reason all these other tools popped up is because Pip only installs the packages and offers very basic environment inspection; it's only now starting to do anything with lockfiles, for example, now that there is finally a standard for them (https://peps.python.org/pep-0751/).
But if you mean, is there a standard toolchain that does everything and will be officially blessed by the core language developers, then no, you should not ever expect this. There is no agreement on what "everything" entails, and Python users (a large fraction of which don't fit the traditional image of a "developer" at all) have widely varying workflows and philosophical/aesthetic preferences about that. Besides which, the core language team doesn't generally work on or care about the problem; they care about the interpreter first and foremost. Packaging is an arms-length consideration. Good news, though: the Python Packaging Authority (not at all authoritative, and named with tongue firmly in cheek, but a lot of people didn't get that) is stepping up and working on official governance (see https://peps.python.org/pep-0772/).
> at last count my laptop has 38 python binaries
Something has gone very wrong (unless you're on Windows, such that admin rights would be needed to create symlinks and by default `venv` doesn't try). To be clear, I mean with your setup, not with the tooling. You should only need one per distinct version of Python that your various environments use. I'd be happy to try to help if you'd like to shoot me an email (I use that Proton service, with the same username as here) and give more details on how things are currently set up and what you're trying to accomplish that way.
I will say coming from years of perl that python had a refreshing amount of "batteries included" via the standard library.
It was only rarely that my code needed "outside help", usually something like requests or numpy.
I suspect this is because I used python in the same environment as perl, automating unixy kinds of things.
I suspect "setting up an environment" is because python has been so successful, becoming an enormously broad general language.
Not, if you know what you are doing.
Most languages have a decent enough framework or two that the differences between using them for different use cases may be closer than many folks realize vs whatever we hear about as the new hotness through the grapevine.
A mess can be made in a lot of languages, and a long time ago, it was even easier, except some of that code worked and it didn't get touched for a long time.
The enormous reason that I see is the insistence, from Larry Wall and others, on a bottom-up "community" transition from Perl 5 to Perl 6. The design process for Perl 6 was announced at a Perl conference in 2000 [1]; 15 years later, almost every Perl user was still using Perl 5. The inability of the Perl community to push forward collectively in a timely way should be taken by every other language community as a cautionary tale.
Tim O'Reilly made a secondary point that may also be important. For a long time, Perl books were O'Reilly's biggest sellers. But the authors of those titles didn't act on his suggestion that they write a "Perl for the Web" book (really a Perl-for-CGI book). Books like that eventually came, but the refusal of leading authors to write such a book may have made it easier for PHP to get a foothold.
[1] https://en.wikipedia.org/wiki/Raku_(programming_language)#Hi...
It succeeded because it was a beautiful and horrible combination of those tools/languages. If you know those things, Perl is really easy to bang together and generate a webpage or use to automate administrative task.
Given Larry Wall's education, perhaps we shouldn't be surprised that Perl underwent a linguistic evolution from pidgin[1] to creole[2].
I came to Perl a bit late, not being a super adept user of Unix and only having written C in school. I'd put myself in the category of Perl developer that was part of the second "creole language" phase of Perl's development. I learned a ton of Unix and got better at C by learning Perl.
While Perl's mixed nature made it successful, when the world of web development expanded to include more people without the necessary background to benefit from the admixture, it went from asset to hinderance. All that syntax went from instantly familiar to bizarre. A perfect example of this are the file test operators[3].
The Perl 6 struggles definitely added to the difficulties posed by the changing nature of the web dev community. They created enough uncertainty that tons of people asked themselves "Why should I learn Perl 5 when Perl 6 is just around the corner?". That slowed adoption from in the 2001-2010 timeframe while Python and Ruby grew rapidly.
Rakulang is kind of magical. Writing it is like using a language given to us by aliens. I wish it was getting more uptake because it is fun and truly mind expanding to write.
In the end, I think the shifting nature of the community was a larger factor in the decline of Perl than the slow Perl 6 rollout and failure of messaging around it.
I still love writing Perl 5 and I wish I got to do more of it.
[1] https://en.wikipedia.org/wiki/Pidgin [2] https://en.wikipedia.org/wiki/Creole_language [3] https://perldoc.perl.org/functions/-X
It's become a poster child for how not to do a major transition.
KDE3/4, GNOME 2/3, Python 2/3 transitions all benefited from this hindsight (still experiencing a lot of pain themselves).
Raku might be an interesting language (I haven't dug deep), but it's not Perl. Larry et al should've just called it separately from the start and allow others to carry on the Perl torch. They did this too late, when the momentum was already dead.
Perl 5 was a product of its time, but so was Linux, C, Python 2 or PHP3, and they're still very much relevant.
> The inability of the Perl community to push forward collectively in a timely way should be taken by every other language community as a cautionary tale.
I think this is a good point that I hadn't considered before.
I think Perl stopped being able to attract new users. There is always going to be users leaving. If they aren't replaced, you will slowly shrink.
I think the point you raised is part of why they couldn't attract new users. I also think people asked themselves "why chose perl now, if I know I need to re-write when Perl6 comes?" and decided Perl5 was bad choice. I also think the fact Perl had this reputation for being ugly, difficult, and "write only line noise" kept people from even considering it, even if that reputation didn't match production codebases.
Not saying your anecdote is inaccurate, but my perception around that time was that "Learn PHP in 24 Hours" was a lot hotter than O'Reilly's Perl books - so it may have just been luck, marketing, a flashier title, or even just that PHP was better suited for what people wanted to learn and do.
[0] https://perldoc.perl.org/perldsc
1. The rise of other web languages that did what Perl was good at but better.
Perl was probably a fine language for sysadmins doing some text munging. Then it became one of the first languages of the web thanks to Apache and mod_perl, but it was arguably never great at that. Once PHP, Python, Ruby, and (eventually) JavaScript showed up, Perl's significant deficiencies for large maintainable codebases made it very hard to compete.
In many cases, a new, better language can't outcompete an old entrenched one. The old language has an ecosystem and users really don't like rewriting programs, so that gives it a significant competitive advantage.
But during the early rise of the web, there was so much new code being written that that advantage evaporated. During the dot-com boom, there were thousands of startups and millions of lines of brand-new code being written. In that rare greenfield environment, newer languages had a much more even playing field.
2. Perl 6 leaving its users behind.
As a language maintainer, I feel in my bones how intense the desire is to break with the past and Do Things Right This Time. And I'm maintaining a language (Dart) that is relatively new and wart-free compared to Perl. So I can't entirely blame Wall for treating Perl 6 as a blank check to try out every new idea under the sun.
But the problem is that the more changes you make to the language, the farther you pull away from your users and their programs. If they can't stay with you, you both die. They lose an active maintainer for the core tools they rely on. And you lose all of their labor building and maintaining the ecosystem of packages everyone relies on.
A racecar might go a lot faster if it jettisons the weight of its fuel tank, but it's not going to go faster for long.
I think the Perl 6 / Raku folks got so excited about making a new language that they forgot to bring their users with them. They ran ahead and left them behind.
3. A wildly dynamic language.
If you want to evolve a language and keep the ecosystem with you while you do it, then all of that code needs to be constantly migrated to the new language's syntax and semantics. Hand-migrating is nightmarishly costly. Look at Python 3.
It's much more tractable if you can do most of that migration automatically using tools. In order to do that, the tools need to be able to reason in detail about the semantics of a program just using static analysis. You can't rely on dynamic analysis (i.e. running the code and seeing what it does) because it's just not thorough enough to be safe to rely on for large-scale changes to source code.
Obviously, static types help a lot there. Perl 5 not only doesn't have those, but you can't even parse a Perl program without running Perl code. It is a fiendishly hard language to statically analyze and understand the semantics of.
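A hedged illustration of that last point, adapted from the well-known "Perl can't be parsed statically" examples (the sub name is made up): whether a slash is division or the start of a regex can depend on a prototype the parser only learns by executing code, e.g. a BEGIN block or a loaded module.

    use strict;
    use warnings;

    sub whatever() { 100 }    # empty prototype: takes no arguments

    # Because the parser already knows whatever() takes no arguments, the '/'
    # below is division and the rest of the line is a comment; this prints 4.
    my $x = whatever / 25 ; # / ; die "this dies!";
    print "$x\n";

    # Had the declaration been  sub whatever($) { ... }  -- or had it come from
    # a module loaded at run time -- the very same line would parse as a regex
    # match passed to whatever(), followed by a live die().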
So even if the Perl 6 folks wanted to bring the Perl 5 ecosystem with them, doing so would have been extremely challenging.
I would say this is a case study in a programming language tragedy, but I honestly don't even know if it's a bad thing. It may be that programming languages should have a life cycle that ends in them eventually being replaced entirely by different languages. Perhaps Perl's time had simply come.
I am grateful for all of the innovative work folks have done on Perl 6 and Raku. It's a cornucopia of interesting programming language ideas that other languages will be nibbling on for decades.
Perl didn't only have mod_perl. It also had the same kind of framework that made Ruby and Python great for web development: it was called Catalyst, and it was production-ready around the same time as RoR and Django.
The real reason Perl failed on this front, IMHO, is that the language makes it super unergonomic to define nested data structures. In JavaScript, Ruby, and Python, a nested structure of lists and dictionaries is just JSON-like syntax: { "x": [...], "y": [...] }
In Perl you have to deal with scalars, references, references to scalars, value references, ... and on top of that the sigils mean different things depending on what the variable contains. I mean, I spent significant time writing Perl and never figured this out.
In a world where you just want a CRUD app to load and save a piece of structured data, the languages that let you operate on the data and keep your sanity win.
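To make the contrast concrete, a quick sketch of the same shape of data in Perl 5 (nothing exotic, just the standard reference syntax):

    use strict;
    use warnings;

    # The JSON-ish structure above, in Perl 5: hash values can only be scalars,
    # so the nested arrays have to be stored as references.
    my %data = (
        x => [ 1, 2, 3 ],
        y => [ 4, 5, 6 ],
    );

    # Getting at the nested data means dereferencing, and the sigil depends on
    # what you want back:
    my $x_ref   = $data{x};          # a reference to the array
    my @x_items = @{ $data{x} };     # the whole array
    my $first   = $data{x}[0];       # a single element
    my $count   = scalar @{ $data{y} };

    print "$first, $count\n";        # prints "1, 3"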
Should people just not make new languages any more?
Should new languages not bear deep resemblances to existing ones?
> Hand-migrating is nightmarishly costly. Look at Python 3.
> It's much more tractable if you can do most of that migration automatically using tools.
There was, and is, in fact abundant tool support for this in Python. The `2to3` script shipped with Python, and the corresponding `lib2to3` was part of the standard library through 3.11. The third-party `six` compatibility library is still downloaded from PyPI more often than NumPy.
> In order to do that, the tools need to be able to reason in detail about the semantics of a program just using static analysis.
The experience of these tools' users disagrees, from what I've seen. They produce ugly code in places, but it generally passes the tests. Yes, manual fixes are often necessary, but there were source packages published in that era (and it's not like they've disappeared from PyPI) that would detect the Python version in `setup.py`, run the tool at installation time if necessary, and have happy users.
Perl's language design fetishizes solving self-inflicted puzzle after puzzle after puzzle.
https://www.artima.com/weblogs/viewpost.jsp?thread=147358
Language Design Is Not Just Solving Puzzles
by Guido van Rossum, February 10, 2006
Summary: An incident on python-dev today made me appreciate (again) that there's more to language design than puzzle-solving. A ramble on the nature of Pythonicity, culminating in a comparison of language design to user interface design.
Some people seem to think that language design is just like solving a puzzle. Given a set of requirements they systematically search the solution space for a match, and when they find one, they claim to have the perfect language feature, as if they've solved a Sudoku puzzle. For example, today someone claimed to have solved the problem of the multi-statement lambda.
But such solutions often lack "Pythonicity" -- that elusive trait of a good Python feature. It's impossible to express Pythonicity as a hard constraint. Even the Zen of Python doesn't translate into a simple test of Pythonicity.
In the example above, it's easy to find the Achilles heel of the proposed solution: the double colon, while indeed syntactically unambiguous (one of the "puzzle constraints"), is completely arbitrary and doesn't resemble anything else in Python. A double colon occurs in one other place, but there it's part of the slice syntax, where a[::] is simply a degenerate case of the extended slice notation a[start:stop:step] with start, stop and step all omitted. But that's not analogous at all to the proposal's lambda <args>::<suite>. There's also no analogy to the use of :: in other languages -- in C++ (and Perl) it's a scoping operator.
And still that's not why I rejected this proposal. If the double colon is unpythonic, perhaps a solution could be found that uses a single colon and is still backwards compatible (the other big constraint looming big for Pythonic Puzzle solvers). I actually have one in mind: if there's text after the colon, it's a backwards-compatible expression lambda; if there's a newline, it's a multi-line lambda; the rest of the proposal can remain unchanged. Presto, QED, voila, etcetera.
But I'm rejecting that too, because in the end (and this is where I admit to unintentionally misleading the submitter) I find any solution unacceptable that embeds an indentation-based block in the middle of an expression. Since I find alternative syntax for statement grouping (e.g. braces or begin/end keywords) equally unacceptable, this pretty much makes a multi-line lambda an unsolvable puzzle.
And I like it that way! In a sense, the reason I went to considerable length describing the problems of embedding an indented block in an expression (thereby accidentally laying the bait) was that I wanted to convey the sense that the problem was unsolvable. I should have known my geek audience better and expected someone to solve it. :-)
The unspoken, right brain constraint here is that the complexity introduced by a solution to a design problem must be somehow proportional to the problem's importance. In my mind, the inability of lambda to contain a print statement or a while-loop etc. is only a minor flaw; after all instead of a lambda you can just use a named function nested in the current scope.
But the complexity of any proposed solution for this puzzle is immense, to me: it requires the parser (or more precisely, the lexer) to be able to switch back and forth between indent-sensitive and indent-insensitive modes, keeping a stack of previous modes and indentation level. Technically that can all be solved (there's already a stack of indentation levels that could be generalized). But none of that takes away my gut feeling that it is all an elaborate Rube Goldberg contraption.
Mathematicians don't mind these -- a proof is a proof is a proof, no matter whether it contains 2 or 2000 steps, or requires an infinite-dimensional space to prove something about integers. Sometimes, the software equivalent is acceptable as well, based on the theory that the end justifies the means. Some of Google's amazing accomplishments have this nature inside, even though we do our very best to make it appear simple.
And there's the rub: there's no way to make a Rube Goldberg language feature appear simple. Features of a programming language, whether syntactic or semantic, are all part of the language's user interface. And a user interface can handle only so much complexity or it becomes unusable. This is also the reason why Python will never have continuations, and even why I'm uninterested in optimizing tail recursion. But that's for another installment.
However, I would say an awful lot of the professionals I was around already thought Perl had a bad smell even in the 1990s. It was definitely looked down upon in academia by then. Maybe not in an IT department or a Math department, but in the CS department it was. It was used by IT guys, QA guys, and somebody gluing some tools together. An awful lot of people thought it was unacceptable for serious production code, or for anything that had to be maintainable long term or worked on by a team larger than size = 1. Your perception of it definitely came from where you were at the time and how you encountered it. If you were on a team producing software for sale, with version control and QA and everything, Perl was already not your thing.
Anyhow, toward the end of my time there, I had to interview candidates. Because I came to believe that the above is how one had to work with Perl, I took to asking the ones who said they knew Perl, "Can the reverse builtin reverse a list?" (I won't spoil the answer for you.) Most would answer with "Yes" or "No"; 75% of them were mistaken. Either way, I'd ask them, "Suppose you weren't confident about that answer. How would you determine the truth?" IIRC, 90% of them said "I'd write this one-liner..." and (I swear) 100% of those one-liners would give any reasonable person the impression of an answer that turns out to be incorrect. The ones who said "I'd check the perldoc" were the only ones I'd approve for subsequent interviews.
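(Without spoiling the answer: the reason those one-liners mislead is Perl's context sensitivity, which a quick sketch can show without touching the list question itself.)

    use strict;
    use warnings;

    print reverse("hello"), "\n";          # prints "hello" -- print supplies list
                                           # context, and reversing a one-element
                                           # list hands you the same list back
    print scalar reverse("hello"), "\n";   # prints "olleh" -- in scalar context
                                           # reverse works character by character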
When I was a kid and had just started working, I would still often run with whatever I came up with initially, but then I'd go back to it three months later and have to throw the whole thing out because it was impossible to maintain or add to.
On the other hand, there are sometimes additions to a language that are just so useful that you have to expand your vocabulary; in C#, for example, that's happened a few times. One of the notable additions there was LINQ, which made manipulating data so much easier. It can become dangerous though, much like a complicated DB stored procedure.
But not that many. And that's why Perl is still Perl. Popularity brings change which means old code stops working.
Perl code from the year 2000 still works in a perl interpreter+libs today. And perl code written today still works on perl interpreter+libs from the year 2000. That incredible stability and reliability is what makes it great. Write something then use it 20 years later and everything just works anywhere you try to run it.
That's why I chose Perl.