Animats · 8 years ago
Things to think about for the near future of programming languages:

- The borrow checker in Rust is a great innovation. Previously the options were reference counts, garbage collection, or bugs. Now there's a new option. Expect to see a borrow checker in future languages other than Rust.

- Formal methods are still a pain. The technology tends to come from people in love with the theory, resulting in systems that are too hard to use. Yet most of the stuff you really need to prove is very dumb. X can't affect Y. A can't get information B. Invariant C holds everywhere outside the zone (class, module, whatever) where it is transiently modified. No memory safety violations anywhere. What you really need to prove, and can't establish by testing, is basically "bad thing never happens, ever". Focus on that.

- The author talks about "Java forever". It's more like Javascript Everywhere.

- Somebody needs to invent WYSIWYG web design. Again.

- Functional is OK. Imperative is OK. Both in the same program are a mess.

- Multithread is OK. Event-driven is OK. Coroutine-type "async" is OK. They don't play well together in the same program. Especially if added as an afterthought.

- Interprocess communication could use language support.

- We still can't code well for numbers of CPUs in triple digits or higher.

- How do we talk to GPU-type engines better?
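The borrow-checker point above can be sketched concretely. A minimal Rust example (names are illustrative) of the rule it enforces: any number of shared borrows, or exactly one mutable borrow, but never both at once.

```rust
// Sums via a shared borrow; the caller keeps ownership of the vector.
fn sum(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let mut scores = vec![1, 2, 3];
    let total = sum(&scores); // shared borrow ends before the next line
    scores.push(total);       // mutable borrow is now allowed
    // A shared and a mutable borrow may never overlap; e.g.
    //   let r = &scores[0]; scores.push(0); println!("{}", r);
    // would be rejected at compile time, with no reference counts,
    // no garbage collector, and no dangling pointer.
    assert_eq!(scores, vec![1, 2, 3, 6]);
    println!("{:?}", scores);
}
```

The guarantee is established statically, which is why it is an alternative to both reference counting and garbage collection rather than a runtime mechanism.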

PaulRobinson · 8 years ago
I disagree with a few of your thoughts, but they're good thoughts!

* Javascript everywhere is a function of its low barrier to entry, but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry. One thing I've noticed (and I say this as a guy who wrote Ruby for 10+ years) is that type safety is becoming a hugely desired feature for developers again.

* WYSIWYG web design (a la Dreamweaver) died off a little because the tools saw a web page as standing in isolation. We know however that isn't interesting on its own - it needs to be hooked up to back-end functionality. Who is producing static HTML alone these days? In the case of SPAs it needs API integration. In the case of traditional web app dev, it needs some inlining and hooks to form submission points and programmatically generated links to other "pages". Making that easier is the hard part - seeing a web document as an artefact output by a running web application container.

* Multi-threaded, event-driven, coroutine-type patterns are fine in Go, to my eye. What's making you think we can't mix this up with the right type of language and tooling support?

* Is it that we can't code well for CPU counts > 100 or that the types of problems we're looking at right now that need that level of parallelism tend to be targeted towards GPUs or even ASICs? I think I'd need to see the kind of problems you're trying to solve, because I'm not sure high CPU counts are the right answer.

* Talking to GPU-type engines is actually pretty simple, we will deal with it the same way we deal with CPU-type engines: abstraction through a compiler. Compilers over time will learn how to talk GPU optimally. GPU portability over the next 20 years will be a problem to solve as CPU/architecture portability was over the last 40.

tluyben2 · 8 years ago
> Javascript everywhere is a function of low barrier-to-entry for it, but almost everybody agrees it is flawed as a language

"Everybody" here means everybody who has used other languages intensively, is into programming languages, or (the most negative party on JS) is into formal methods. But "everybody"; I often get downvoted to hell for being negative on JS on Reddit. And I'm not using a baseball bat; I'm subtle about it, as I don't care for language wars. Use what you want, but please don't say it's the best thing to happen to humanity. Yet "everybody" (as in headcount) thinks it is exactly that, and that other languages should die because you can write everything in JS anyway.

blipblop · 8 years ago
> but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry.

What language is not flawed? And why are we "screwed"? I don't get this FUD... there are more important things than language choice, such as dependency management system + community + ecosystem. JS lets you get on with the job and get things done quickly. You need performance - use C/C++ bindings. It's been clear for a long time that JS is the safest long-term choice and is slowly creeping into every other language's castle.

ryanmarsh · 8 years ago
> type safety is becoming a hugely desired feature for developers again

I don’t think anyone ever hated type safety, I think they hated verbose syntax and unfortunately conflated the two.

Now we’re finding a happy medium with compiler type inference and people are like, wait what?
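A small illustration of that happy medium, sketched in Rust (any statically typed language with inference would do): the code is fully type-checked at compile time, yet reads almost annotation-free.

```rust
fn main() {
    // No explicit type annotations, but everything is statically checked:
    // `words` is inferred as Vec<&str>, `lengths` as Vec<usize>.
    let words = vec!["type", "safety", "without", "verbosity"];
    let lengths: Vec<_> = words.iter().map(|w| w.len()).collect();
    assert_eq!(lengths, vec![4, 6, 7, 9]);
    println!("{:?}", lengths);
}
```

Misusing `lengths` as, say, a collection of strings would still fail at compile time, which is the safety people wanted without the verbose syntax they hated.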

aaron-lebo · 8 years ago
It's really not that bad. Modern JS isn't ideal, but many parts of it are as good as or better than their equivalents in Ruby or Python, and most people don't act like those are horrible languages. I'd gladly use it over either.
harimau777 · 8 years ago
What causes you to say that JavaScript is a flawed language? Not trying to be snarky or saying you're wrong, just want to better understand your reasoning.

It seems to me that at one point JavaScript had a lot of confusing/bad design decisions but that more recent changes have largely eliminated them. For example, I almost never have to worry about "this" anymore.

I recently worked on a project using TypeScript and I really appreciated how it changed a lot of the bugs from being runtime to compile time. I could definitely see how the lack of compile-time checking is a big flaw, but it seems like the community is developing solutions.

marcosdumay · 8 years ago
About WYSIWYG, we are missing standards on our APIs. SOAP was going that way but it was way too much, way too early.

Either that, or an ASP.Net view where the backend is interacting with the user through a browser. But that doesn't work well. It's much better to standardize the backend API than the entire frontend.

pjmlp · 8 years ago
> WYSIWYG web design (a la Dreamweaver) died off ....

Which is why what we actually need is something a la Delphi.

d13 · 8 years ago
Unfortunately JavaScript is the present and, yes, we're screwed.
flavio81 · 8 years ago
> *If that's the future, we are screwed as an industry.*

Agree 100%. However, I find Javascript pretty good for quick and dirty "MVPs".

stdbrouw · 8 years ago
> Functional is OK. Imperative is OK. Both in the same program are a mess.

I guess you're referring to languages that are not-quite-without-side-effects, but I'd say the biggest influence the functional paradigm has had on other (imperative) programming languages is actually the addition of higher-level data manipulation operations. The functional utility libraries you see for languages such as Python and JavaScript exist solely because sometimes functional idiom like "map this onto this" or "take that only if this holds" or "let's make a new function by pre-filling these function arguments" is more intuitive than having for loops all over the place. And it mixes just fine with other imperative code.
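Those higher-level data manipulation operations mix with imperative code in most modern languages. A sketch in Rust (values are illustrative), showing "take that only if this holds", "map this onto this", and argument pre-filling, side by side with ordinary imperative code:

```rust
fn main() {
    let temps = vec![12, 31, 18, 25, 40];

    // Filter + map instead of a for loop with an accumulator.
    let hot_in_f: Vec<i32> = temps
        .iter()
        .filter(|&&c| c > 20)       // keep only readings above 20 °C
        .map(|&c| c * 9 / 5 + 32)   // convert to Fahrenheit
        .collect();
    assert_eq!(hot_in_f, vec![87, 77, 104]);

    // "Pre-filling function arguments": a closure capturing one argument.
    let add = |a: i32| move |b: i32| a + b;
    let add_ten = add(10);
    assert_eq!(add_ten(5), 15);
}
```

Nothing here requires purity; the functional idioms coexist with mutable state elsewhere in the same program.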

theoh · 8 years ago
Presumably animats is talking about things like "do notation" in Haskell, not innocuous cases of function composition or first-class functions in imperative languages.
Chris_Newton · 8 years ago
> I guess you're referring to languages that are not-quite-without-side-effects, but I'd say the biggest influence the functional paradigm has had on other (imperative) programming languages is actually the addition of higher-level data manipulation operations.

I tend to agree. The two big wins from a more “functional” style, from my perspective, are the clear emphasis on the data and the way effects are more explicit and controlled.

I want things like higher order functions and algebraic data types and powerful interface/class systems. With those I gain many useful ways to represent and manipulate data that I don’t have in most languages today.

In a world where most mainstream languages are just discovering filter, map and reduce on their built-in list types, a language like Haskell gives me, out of the box, tools like mapAccumWithKey that work with any data structure as long as it provides the specific, clearly defined interfaces required for the algorithm to make sense.

In a world where most mainstream languages are worrying about accidentally dereferencing nulls or whether there’s a proper type for enumerating a set of values, functional languages routinely use algebraic data types and pattern matching, and some go much further.

Arguably, these aren’t really functional concepts at all, in that you could have them just as well in an imperative language. However, in practice it is the functional-style languages that are far ahead in these areas, because they are a natural way to work in languages that emphasize composition of functions and careful, explicit handling of data.

I also want to know that I’m not applying effects on resources unintentionally, or sharing resources without proper synchronisation, or trying to apply effects on resources in an invalid order, or failing to acquire or release resources properly, or leaving resources in a mess if something aborts partway through an intended sequence of effects. This aspect goes a lot further than just making data constant-by-default, but it certainly doesn’t require trying to remove state and effects altogether. These things aren’t so much about making my code more expressive but about stopping me from making mistakes.

I want a language that will stop me from accidentally modifying a matrix in-place in one algorithm while some other algorithm has a reference to that matrix that it assumes won’t change. I don’t want a language that will stop me from ever modifying a matrix in-place. Sometimes modifying things in-place is useful.

I want a language that will be explicit about the initialisation and lifetime and clean-up of a locally defined cache or temporary buffer. I don’t want a language that tells me I can’t cache a common, expensively computed result 15 levels deep in my call hierarchy without changing the signature of every function on every possible path to that point in the code, or a language that will let me do whatever I want but only if I use some magic “unsafe” keyword that forfeits most or all useful guarantees about everything else in the universe as well.

In this respect, my personal ideal programming style for most tasks very much would be a hybrid of imperative/stateful and functional/pure styles, with the key point being that the connections between them should be explicit, obvious and deliberate.

TeMPOraL · 8 years ago
Great points, but I have a question about one of them in particular:

> - Functional is OK. Imperative is OK. Both in the same program are a mess.

What do you mean by that?

My experience is that functional alone is impossible, since the only useful thing a program can do is through state changes; imperative is a-OK, and functional+imperative in the same program is the best way to do things (i.e. well-defined stateful areas surrounded by lots of functional code).

s4vi0r · 8 years ago
Once you get over the initial learning curve of the functional/pure approach to state/IO, its far superior to imperative imo. You don't need to reason about global state - because everything is explicit, including passing around your state, you never have to worry about "what if someone else or some other code somewhere is touching this" again.
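That explicit state-passing style can be sketched outside a pure language too. A hypothetical Rust example (names invented for illustration) where state is a value threaded through a fold, so nothing global is ever touched:

```rust
// State is an ordinary value, passed in and returned; no shared mutation.
#[derive(Debug, PartialEq)]
struct Account {
    balance: i64,
}

// Each event produces a *new* state rather than mutating one in place.
fn apply(state: Account, delta: i64) -> Account {
    Account { balance: state.balance + delta }
}

fn main() {
    let start = Account { balance: 100 };
    // Thread the state through a sequence of events explicitly.
    let end = [25, -40, 10].iter().fold(start, |acc, &d| apply(acc, d));
    assert_eq!(end, Account { balance: 95 });
    println!("{:?}", end);
}
```

Because every transition goes through `apply`, "who is touching this state" is answered by reading the function's call sites.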
chrisseaton · 8 years ago
> My experience is that functional alone is impossible, since the only useful thing a program can do is through state changes

Some programs are simply pure functions. If your program entry point accepts a string and returns a string, then you can write useful things, such as compilers, image processing, grep, etc., entirely as pure functions.
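A sketch of that shape in Rust: the whole "program" is a pure string-to-string function, and `main` only wires it to the outside world (the pattern is illustrative, not a real grep).

```rust
// The entire program logic: text in, text out, no I/O, no mutation escapes.
fn grep(pattern: &str, input: &str) -> String {
    input
        .lines()
        .filter(|line| line.contains(pattern))
        .map(|line| format!("{}\n", line))
        .collect()
}

// The impure shell is a single line at the edge.
fn main() {
    let text = "apple\nbanana\npineapple\n";
    print!("{}", grep("apple", text));
}
```

Everything worth testing lives in the pure function, which can be exercised with no setup at all.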

pdimitar · 8 years ago
Erlang / Elixir are 100% immutable inside the code (no var can ever be mutable).

However, they have a mutable in-process cache (living in the BEAM VM) that many people have written libraries around for stuff like mutable arrays, matrices, graphs, and many others.

It goes like this: do 99.5% functional programming and you have the imperative / stateful tools for when they are absolutely necessary.

This works. Extremely well.

notacoward · 8 years ago
> Interprocess communication could use language support.

+100

There have been a few attempts in this direction, but they have mostly been couched in the form of a whole new language that also embodies at least a half dozen other novel (i.e. unfamiliar) ideas as well. Contra the OP, I think this is an area where incrementalism does work. Extending a language people already know with a few constructs for IPC, much like fork/join or async/await have done for concurrency, is much more appealing. I've been thinking about this for a few years now. Maybe I should write some of that down and let people pick at it.

Twisol · 8 years ago
> - Interprocess communication could use language support.

I'm really interested in hearing more of your thoughts on this, since it touches on one of my personal research interests. What kind of language support for IPC are you looking for? Something in the vein of session types [1], which checks that two parties communicate in a "correct" sequence of messages?

[1] https://dl.acm.org/citation.cfm?id=1328472

Animats · 8 years ago
Lower level than that. Languages should have marshalling support. Marshalling is a low-level byte-pushing operation for which efficient hard machine code can be generated.

I'd suggest offering two forms of marshalling - strongly typed and non-typed. Strongly typed marshalling means sending a struct to something that expects exactly that struct. That will usually be another program which is part of the same system. Structs should be able to include variable-length items for this purpose, so you can send strings. Checking involves something like function signature checking at connection start. This should have full compiler support.

Non-typed marshalling includes JSON and protocol buffers. The data carries along extensive description information, and the sender and recipient don't have to be using exactly the same definition.

Both are needed. Non-typed marshalling is too slow for systems which are using multiple processes for performance. Typed marshalling is too restrictive for talking to foreign systems.
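A hand-rolled sketch of the strongly typed form in Rust (the `Reading` struct and its fields are invented for illustration): a fixed layout with a length-prefixed variable-length string, and no type descriptions on the wire because both ends share the struct definition.

```rust
use std::convert::TryInto;

// A hypothetical record both sides agree on at compile time.
#[derive(Debug, PartialEq)]
struct Reading {
    sensor_id: u32,
    value: f64,
    label: String, // variable-length item, sent with a length prefix
}

// Strongly typed marshalling: pure byte-pushing, no self-description.
fn marshal(r: &Reading) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&r.sensor_id.to_le_bytes());
    buf.extend_from_slice(&r.value.to_le_bytes());
    buf.extend_from_slice(&(r.label.len() as u32).to_le_bytes());
    buf.extend_from_slice(r.label.as_bytes());
    buf
}

fn unmarshal(buf: &[u8]) -> Reading {
    let sensor_id = u32::from_le_bytes(buf[0..4].try_into().unwrap());
    let value = f64::from_le_bytes(buf[4..12].try_into().unwrap());
    let len = u32::from_le_bytes(buf[12..16].try_into().unwrap()) as usize;
    let label = String::from_utf8(buf[16..16 + len].to_vec()).unwrap();
    Reading { sensor_id, value, label }
}

fn main() {
    let r = Reading { sensor_id: 7, value: 21.5, label: "cpu-temp".into() };
    let wire = marshal(&r);
    assert_eq!(unmarshal(&wire), r);
    println!("{} bytes on the wire", wire.len());
}
```

This is the kind of code a compiler could generate mechanically from the struct definition, plus a signature check at connection start; JSON or protocol buffers would instead carry field names and type tags in the payload itself.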

catnaroek · 8 years ago
> Previously the options were reference counts, garbage collection, or bugs.

I think you mean “lack of memory safety” rather than bugs. Garbage collection doesn't magically free you from finalization bugs, it just makes their consequences less disastrous.

> Formal methods are still a pain. The technology tends to come from people in love with the theory, resulting in systems that are too hard to use.

Clearly, the solution is getting rid of people, but it isn't entirely clear on which end to get rid of people.

Tarean · 8 years ago
Multi-threaded + async isn't a problem if the runtime system supports it.

Async + event-driven works transparently with higher-order FRP, with the usual tradeoffs of first-order vs higher-order FRP.

Multi-threaded + event-driven seems like a mess as soon as state gets involved. Some transaction-based system might be interesting.


YZF · 8 years ago
One area I haven't seen any good solutions for is the interleaving of tests with production code. I think we need better ways to express tests without cluttering the production code and excessive mocking. What I'd like to do, and really can't in any language/tooling that I know of, is to extract some arbitrary subset of the code and surround it with tests or a test harness. I'd also like to specify various injection points for tests right as I'm writing the code in a way that doesn't affect the production code and contributes to its readability. Perhaps this is pieces of server and client code or various services all tested together. When I refactor the code I want the tests to "refactor" with the code. I don't want to need to rewrite my tests... Automatic test generation is another interesting area (beyond the basic table or algorithm stuff we might do in tests today).

It's almost like the "problem" isn't the languages themselves, it's the tooling around the languages. I want to be able to do a lot more than "compile", "run", with my code... I can imagine machine learning driven tooling being able to automate a lot of the mechanical aspects of code writing beyond the simple generate a closing bracket that an IDE can do...

WalterBright · 8 years ago
It may not be exactly what you want, but D supports unit tests embedded directly in the production code:

  https://dlang.org/spec/unittest.html
It's been a real game changer for us in improving the quality of the code.

geogriffin · 8 years ago
William Byrd's work in program synthesis is an interesting take on this, where an IDE can, guided by test cases written for a function, actually auto-complete code itself whilst writing the function, or tell the programmer when they have something wrong by violating a test case. Of course it is impractical right now, but a good step nonetheless. It (Barliman) is demoed near the end of this video: https://youtu.be/OyfBQmvr2Hc
zmonx · 8 years ago
This is indeed an extremely interesting approach.

It was pioneered in the context of logic programming and Prolog by Ehud Shapiro in his 1982 PhD thesis "Algorithmic Program Debugging". The thesis was published as an ACM Distinguished Dissertation by MIT press and is available online from:

http://cpsc.yale.edu/sites/default/files/files/tr237.pdf

Together with Leon Sterling, Ehud Shapiro later wrote a very important introductory Prolog book called "The Art of Prolog".

mrkgnao · 8 years ago
MagicHaskeller[0] (which seems to sadly be offline at present) is a similar project that infers Haskell functions from properties like

   reverse "abcde" = "edcba"
Other, older projects include Exference[1] and Djinn[2], in decreasing order of power.

Also, this is really similar to how Idris and Agda work, except they use expressive types to generate the code (using the Emacs modes) rather than test cases.

[0]: http://nautilus.cs.miyazaki-u.ac.jp/cgi-bin/MagicHaskeller.c...

[1]: https://github.com/lspitzner/exference/

[2]: http://www.hedonisticlearning.com/djinn/

mpweiher · 8 years ago
> interleaving of tests with production code

> tests without cluttering the production code and excessive mocking

IMHO, tests should be an integral part of production code. In MPWTest[1], tests are typically expressed on the class side of the class under test, in a category called Testing, though it's easy to override that. This solves 2 of my major annoyances with xUnit-style testing: dual class/test hierarchies and (with static typing) the need to add public interface for the tests, which don't have privileged access.

Mocking in particular and stubbing should be eliminated as much as possible, but this is not a programming language issue[2], more a "not making architectural assumptions too early" issue.

[1] https://github.com/mpw/MPWTest

[2] http://blog.metaobject.com/2014/05/why-i-don-mock.html

yogthos · 8 years ago
I recommend taking a look at Clojure Spec https://clojure.org/about/spec or Racket Contracts https://docs.racket-lang.org/guide/contracts.html
polymeris · 8 years ago
I get that clojure.spec is primarily a way to define a schema and validate against it, and that you can use it for automatic test generation, too. But I feel I fail to grok the whole extent of the possibilities it offers. Does it address any of the other things YZF mentions? Especially the "prevent excessive mocking" part?


enobrev · 8 years ago
I can imagine something like a code-collapse indicator next to every method in a class (or function or code block, whatever), which would expand tests that can be run on-the-fly - even while you type. Kind of like how comments can be collapsed in some IDEs.

Technically, these could very well be simple commented annotations that point to a separate .test file or something, but are shown in-line by your favorite IDE or code-editor.

tiuPapa · 8 years ago
Don't Rust and cargo do this kinda well?
zbraniecki · 8 years ago
I really like how rust does unit test in line and in docs.
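For readers who haven't seen it, a minimal sketch of that Rust style (function and test names invented): unit tests sit in the same file as the code they cover, and examples in doc comments are also compiled and run by `cargo test`.

```rust
/// Returns the larger of two values. (Examples written in this doc
/// comment would also be compiled and run by `cargo test`.)
pub fn max2(a: i32, b: i32) -> i32 {
    if a > b { a } else { b }
}

// Inline tests: next to the production code for readability, but
// compiled out of non-test builds entirely via #[cfg(test)].
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn picks_the_larger() {
        assert_eq!(max2(3, 5), 5);
        assert_eq!(max2(-1, -2), -1);
    }
}

fn main() {
    println!("{}", max2(3, 5));
}
```

Because the tests are conditionally compiled, they add zero cost to release binaries while staying visible right where the code is maintained.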
rexpop · 8 years ago
Forgive me, this is going to sound snarky but I promise I am earnest:

> I'd also like to specify various injection points for tests right as I'm writing the code

You mean, like methods/functions?

> extract some arbitrary subset of the code and surround it with tests

You mean, like modules/classes?

TeMPOraL · 8 years ago
Kind of, but three big problems in practice are:

- runtime state you need to initialize if your code is written in a stateful manner

- other methods/functions in different modules/classes the code you want to test calls out to

- the fact that method/function and module/class separation is (and IMO always should be) primarily driven by needs of production architecture, not testing, means that it may not be perfect for testing needs

Add to that the failure of testing tools and methodologies (like "don't test private methods" meeting "separate out duplicated or complex code into private methods"), and I feel the problems are real.

----

Here's an idea that just popped into my head right now: how about "hash-based testing"? You take a code you want to test, like:

  private Integer foo(String bar, Frobnicator quux) {
    String frob = quux.invokeMagic(bar);
    return memberApi.transform(frob);
  }
and turn it into:

  Method cut = <<
  private Integer foo(String bar, Frobnicator quux) {
    String frob = MOCK(quux.invokeMagic(bar)) AS("frob");
    return MOCK(memberApi.transform(frob)) AS (frob.length() > 3);
  }>>;
  
  //continue testing cut()
The idea being, the compiler or whatever external tooling ensures that the original method in production code, and the inline-modified method in tests, are the same with respect to some equality/hashing function that always treats expressions "foo" and "MOCK(foo) AS(bar)" as equal.

This way, you end up being able to mock everything precisely, inline, in whatever way you see fit, with your tooling ensuring that the actual code stays in sync with the test, since whenever the original method changes in any meaningful way, you'll fail the code "equality" test.

Might be a stupid idea, I welcome comments.

(INB4 testing to implementation instead of the interface - if you have to mock anything, you're already testing to implementation, and this way you can inject testing alterations precisely, instead of having to turn your architecture inside-out to support IoC/DI/whatever the current testing-enabling OOP fad is.)

YZF · 8 years ago
It's supposed to be this way but in practice it doesn't seem to work. With methods/functions it's a problem when they call other functions.

So really the idea that a good design is also testable is somewhat of an approximation of reality. It has some elements of truth, but at some point making your code more testable actually makes it worse; often you need to make other compromises. Then the interactions of pieces of code typically happen in much more brittle system/integration tests. YMMV, this is just my experience.

A lot of interesting pointers in the replies, thanks!

tomelders · 8 years ago
I don’t do coding tests. I talk to the candidate about programming and I only require one interview. I haven’t made a hiring mistake in 6 years.
NotSammyHagar · 8 years ago
Okay. Care to give any other info? Usually when someone says "I haven't failed at some impressive challenge in a long time" it means the challenge wasn't that impressive. How much interviewing have you done, what were the hard calls you were asked to make, what's your scheme?

When I was at Google, one of the hardest things I did was look at the marginal intern review scores and try to pick out, from the big pile, the ones we should look at again and the ones we should not. There were all kinds of crazy-ass stupid interview questions that in my opinion were not very useful for classifying capability.

closeparen · 8 years ago
Uh, this is about automated testing of programs, not interviewing.
35bge57dtjku · 8 years ago
I'd love to know how you do that, but the answer is always, "I just ask them the right questions and can tell by their answers," which is so vague it's useless.
lmm · 8 years ago
The "Language Gap" slide seems massively overstated, or maybe I'm misunderstanding. We really have seen a lot of progress in the last 10-20 years, both in industrial languages and in academic ideas that could become the industrial languages of the next 10-20 years (e.g. Idris on the short end, Noether on the longer end). The author laments that pattern-matching is still not standard, but we're getting there; map/reduce/filter are standard in all new languages these days (they weren't 10-20 years ago), some kind of lightweight record feature is standard in all new languages, some level of type inference is standard in all languages. Yes, it's taken longer than you might imagine it should, but progress is happening.

Likewise formal methods - they may not be practical in 2016, but there's a lot more awareness, a lot more work being done, and people are starting to try to take the useful parts and apply them in more and more industrial settings. Likewise graphical representation of code - not the LabView nonsense that's exciting to talk about at cocktail parties, but the little touches that today's IDEs do almost invisibly - highlighting, mouseover information, outline views, smart code folding.

I wish we were better at communicating about programming languages. I wish we were moving faster. But despair is unwarranted. We really are in a much better place than 10-20 years ago, and the next 10-20 years look set to bring more improvements.

pjmlp · 8 years ago
10-20 years ago some of us could use Smalltalk, do systems programming with strong type safe languages, use RAD environments like Delphi, release applications in Prolog, for example.

To me it seems we are catching up with the past, and as someone already programming in those environments, it looks like we have spent 10-20 years losing our tools and educating the masses, only to get a taste of how things used to be.

lmm · 8 years ago
I've certainly seen cases where we take one step back in one area to take two steps forward in another; where it takes 5-10 years to get language C that can do something that language A we were using 5-10 years before that could do - but only if we forget that we also wanted some capability in the language B that we couldn't do in A, and C is the first language that manages to synthesise both. And industry is always going to be a long way behind the cutting edge - most of the features we're excited about today are things that were present in ML. But on the whole it feels to me like both a) the mainstream industrial programming experience today is better than it was 10 years ago and b) the academic cutting edge of programming language design today is better than it was 10 years ago, and I expect both those things to continue to be true.
david927 · 8 years ago
> The "Language Gap" slide seems massively overstated, or maybe I'm misunderstanding.

The language gap is relative to where he thinks we should be aiming for, what the author thinks is possible. If you think that we're already "there", then there is no gap for you, and there's nothing wrong with feeling that way.

Personally, I agree with the author: we can do vastly better. I think it's a failure of imagination and effort, not potential.

BatFastard · 8 years ago
From my perspective as a developer for the last 30 years, it's been two steps forward, and 1.5 steps back.

I think JavaScript does some amazing things, but it reminds me of Visual Basic in the 90s or Flash in the 2000s. Anyone could write code (some good, more horrible) with it and do cool stuff.

But the maintainability is horrible. I hope TypeScript comes to the rescue!

aryehof · 8 years ago
My thoughts are that we don't really need more languages. Arguably we don't need better ones either, because they aren't the problem in general computing. Instead we need better design paradigms that better let us model complex requirements and systems into code. Let's have new languages that then support those paradigms.

We continue to struggle abstracting complex problems using functional decomposition, structured analysis, information (data) modeling, and object-based decomposition.

Many newcomers I meet only know modeling problem domain concepts as data in a database, with behavior and constraints acting on that data in a separate layer, organized using functional decomposition. Of course that layer increasingly approaches a 'big ball of mud' as size and complexity increase. Sounds a lot like we are back to the data-flow modeling so popular in the 1980s, in a new guise.

A focus on programming languages in my opinion, masks the real issues we face.

asavinov · 8 years ago
> A focus on programming languages in my opinion, masks the real issues we face.

Indeed, the major problems of programming languages can hardly be solved within the area of programming languages itself, which is how it is being attempted now.

I would say that one needs a new programming or computing model, so it is not about languages. At least that is my conclusion after 10+ years of research and attempts to develop such a new programming paradigm. And although I have made quite significant progress (concept-oriented programming), the more I do and the deeper I go, the more fundamental problems I meet. And these problems are not about programming languages at all. It is more about "how a system works", "what is a system", "what is computing" etc.

pdimitar · 8 years ago
Doing such huge research has to have phases of some sort. At a certain point you should stop, re-evaluate, and say "okay, where I am at now is good enough to solve problems X and Y, most of the time".

Otherwise it's endless and one loses motivation.

ShallowLearning · 8 years ago
Much of the time, new languages lead to new design paradigms. As a general rule, newer languages are more abstracted than older ones. When people don't need to get hung up on the intricacies of low level programming, real progress can be made on the design paradigm front.
asavinov · 8 years ago
I would say that it is a two-directional dependency:

  language constructs <--> design patterns
Programmers experiment with and accumulate various design patterns using existing languages. Then the most useful of them are implemented (frozen) as programming constructs. Then programmers experiment with these new programming constructs and come up with new design patterns. And so on.

In fact, these design patterns and language constructs come in (anti-phase) waves.

aryehof · 8 years ago
I'm curious which languages have resulted in which new design paradigms?
timthelion · 8 years ago
I agree, and from a different perspective, I would compare this to literature. In English literature we have many different paradigms: Victorian literature, Modernism, Postmodernism, Postcolonialism, etc. The language has to be expressive, and in being expressive, it can express any of these different paradigms. The fact that the vocabulary is shared between the "eras" of literature is only a bonus, which makes authors more flexible and more able to experiment with actually new concepts, rather than simply reinventing the same old vocabulary.

We then have Chinese literature and Indian literature, and these have very different paradigms. Perhaps it is even hard to effectively translate from Chinese to English. But the actual variance within a given language is still greater than the variance between the languages.

And like human languages, the vocabulary is often arbitrary. In human languages a dog is called a dog not because it makes the sound "dog" and not because it looks like the letters 'd', 'o', and 'g' but for entirely arbitrary reasons of phonetic shift and arbitrary initial designation.

And in Lisp you take the 'car' of a pair, while in Haskell you take the 'head' of a list. But the two concepts are very much the same. However, the distinction between continuation-passing style (CPS) concurrency and the actor model can be expressed in either language just as well. And the distinction between CPS and actors is FAR greater than the distinction between calling the basic function "head" or "car".

catnaroek · 8 years ago
In Haskell you don't ever take the `head` of a list, because `head` is an evil non-total function. Instead you pattern-match the list: in one branch, the head is given to you for free; in the other branch, there's no head at all.
lmm · 8 years ago
I don't think we've reached the limits of what can be done with better programming languages, simply because there's already such a range in what today's languages can do: I really do think some languages in use today are multiple orders of magnitude better at general computing than other (at least nominally general-purpose) languages that are also in use.
Myrmornis · 8 years ago
I was going to say something which I think may be similar to what you’re saying.

Software and business systems are diagrammed with totally ad-hoc “flow charts”, bubble and arrow diagrams, and less ad-hoc sequence diagrams and UML diagrams. We need advances in formal ways to model concurrent processes, from the level of threads to concurrent business processes.

aryehof · 8 years ago
In a sense yes, although I suggest that a "business process" is too broad and difficult an abstraction. Better for most [1] systems in industry and commerce to elicit and model sequences of recorded events [2] involving interactions with things, people and places in different contexts, with support for constraints [3].

Modeling the above in a business sense, with support in existing paradigms and languages is already a solved problem, just a little known one.

---

[1] The other type of system in industry and commerce being the continuous system that isn't based on recorded events, but instead on logging errors and abnormal circumstances, e.g. an elevator control system or an automated warehouse delivery system, or the engine monitoring software in your car.

[2] Recorded for business or legal reasons.

[3] It's those constraints that prevent an elevator from moving with its doors open, or billing for that product not shipped, or allowing someone to vote twice.

js8 · 8 years ago
I would love to see programming as a dialogue between user and computer (programmer and compiler). For example:

The compiler would infer the types, and the programmer would read them and say: oh, I agree with this type, but I disagree with that one; it's probably wrong and should rather be this other type. Then the compiler would infer the types again, based on the programmer's incremental input.

Data structure selection. The programmer would say: I want a sequence here. The compiler would say: I chose a linked list representation. The programmer would look it over and disagree: you should put this into an array. And the compiler could say: look, based on measurements, an array will save this much space, but a list will be this much faster.

Code understanding. The programmer should be able to say: I don't know what happens here, and the compiler would include some debug code to show more information at that point.

Or take refactoring. The programmer would write some code, and the computer would refactor it to simplify it. Then the programmer would look it over and say: no, I'd rather have this, don't touch it, and would perhaps write some other code. The compiler would refactor again...

But all this requires that there is a syntactically distinct way (so that the editor could perhaps selectively hide it) to specify these remarks in the code, both for the computer and the programmer. So each of them should have a special kind of markup that would be updated at each turn of the discussion. Because you don't want to just overwrite what the other side has just said; both opinions are good (and they complement each other: the human understands the purpose of the code, but the computer understands the inner details much better). So, to conclude, I wish future programming languages would include some framework like this.

bjz_ · 8 years ago
Programming in Lean, Agda, and Idris has been quite a revelation in terms of interactive type system exploration. Granted, they can be flaky at times (Lean especially), but it's a tantalizing glimpse of what could be around the corner. Hazel[1] is also a pretty exciting look at advancing the idea of 'programming with holes', as is Isomorf[2]. Lots of exciting things on the way!

[1]: http://hazelgrove.org/

[2]: https://isomorf.io/

tom_mellior · 8 years ago
> Programmer would say I want a sequence here. The compiler would say, I chose a linked list representation. The programmer would look over it, and disagree, saying, you should put this into an array.

To some extent, this is the promise of object-oriented programming, which in this particular instance has failed a bit in mainstream languages. It's true that in, say, Java, you have to massively refactor your code to switch between arrays and linked lists, because you use different syntax ([] vs. method calls) to access elements. It can be a bit better in C++ thanks to operator overloading: you can hide your actual container type behind a typedef, and as long as both container type A and container type B support the same [] operations you actually use, you can freely switch between them.

In Smalltalk, every container class derives from a single Collection class, and they have very very similar APIs. There you can, to a large extent, just program against the Collection API and not care much about the actual type of collection you have in your hand. You still have to choose one! The compiler won't do it for you, not in the way you envision. But the idea is that if you program against the generic API, then profile/benchmark your code, it should suffice to change a single line of the program to try a different representation to compare against it. (Of course some things won't work. You can't index into a set.)

Other dynamic languages should be similar, Python for example. But I think you have to work harder to achieve full genericity.
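The Smalltalk-style "program against the collection protocol" idea is easy to sketch in Python, which the comment mentions. A minimal, hypothetical example (the function and names are mine): as long as a function uses only generic sequence operations, the concrete representation can be swapped at the call site without touching the function.

```python
from collections import deque
from typing import Sequence

def second_largest(xs: Sequence[float]) -> float:
    # Uses only generic operations (iteration), so any sequence-like
    # container works: list, tuple, deque, ...
    return sorted(xs, reverse=True)[1]

# Swap the representation without changing the function:
print(second_largest([3, 1, 4, 1, 5]))         # 4
print(second_largest((3, 1, 4, 1, 5)))         # 4
print(second_largest(deque([3, 1, 4, 1, 5])))  # 4
```

This is exactly the "change one line to try a different representation" workflow described above, though, as with Smalltalk, you still have to choose the representation yourself.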

marcosdumay · 8 years ago
The types line is what some people do with Haskell or Idris. I personally favor writing the large-scale types beforehand, because that gives the compiler a chance to say "look, your program is wrong", which is way more useful than "hey, your program has this type". Besides, abstract-type-driven programming is an incredibly good methodology where it's applicable.

On code understanding: what makes this better than the programmer inserting the debug statements themselves? Doing it by hand avoids some misunderstanding on the computer's part.

On refactoring, some IDEs do that. I'm on the fence about its usefulness.

js8 · 8 years ago
Thanks, I will respond to other people here as well.

I know a bit of Haskell and want to look at Idris, someday.

My point was, there should be a clean (ideally even syntactic) separation between the code itself (i.e. what should actually be done) and its properties (like types). Also, because there are two points of view on the properties (human and computer), this separation needs to be there twice (so, for example, each type could be specified both by the computer and by the human). I haven't seen a system that does this systematically, at the level of the programming language. I only gave examples to show where it could be used.

nadagast · 8 years ago
I've been thinking a lot about nearly these exact same things. We desperately need better ways to deal with derived data. Why do we make the programmer guess which data structure will work best for a particular task, when we could easily try each option, record the performance, and pick the best? A big part of the reason must be that we have no good strategy for storing that data and making that choice in an ongoing, living way inside a code repository. We suck at dealing with derived data at the meta layer above our programming languages.

Email me at glmitchell[at]gmail if you want to chat more about this.
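The "try each way, record the performance, and pick the best" step can be sketched mechanically with nothing more than `timeit`. All the names below are mine, not the commenter's; it benchmarks one queue-like workload against two candidate representations and selects whichever was faster:

```python
import timeit
from collections import deque

def bench(make, pop_front, n=10_000):
    """Time one workload: n appends, then n pops from the front."""
    def run():
        q = make()
        for i in range(n):
            q.append(i)
        while q:
            pop_front(q)
    return timeit.timeit(run, number=3)

# Two candidate representations for the same abstract "sequence".
timings = {
    "list":  bench(list,  lambda q: q.pop(0)),     # O(n) pop from front
    "deque": bench(deque, lambda q: q.popleft()),  # O(1) pop from front
}
best = min(timings, key=timings.get)
print("chosen representation:", best)  # deque should win on this workload
```

The hard part the comment points at is not this measurement loop but persisting the result and re-deciding as the workload evolves, i.e. keeping the choice "living" inside the repository.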

peoplewindow · 8 years ago
I'm curious whether you've tried IntelliJ with Java or Kotlin, with the various mod cons applied, because it's quite similar to what you're asking for.

Kotlin does type inference. You can see what the inferred type is if you enable inline type hints. It's not a part of the source code, but it looks as if it is (modulo styling). As you work with the code you can see the inferred type change in real time.

OK, data structure selection: it won't help you pick between a linked list and an array. That said, I'm not sure that feature would bring much benefit to most programs. You almost always want arrays.

Code understanding: if you're unsure what's going on at a particular point, you can just add a breakpoint and run the program. You can then explore the state of the program, evaluate arbitrary expressions, add "evaluation breakpoints" that print out the values of those expressions without stopping the app (on-the-fly ad hoc logging, in effect), and investigate how the code you're looking at relates to other code: what the data flows are, where things sit in the type hierarchies, etc. There are a lot of ways to look at the program.

Refactoring: this is the point where I went down this train of thought, because a modern advanced IDE like IntelliJ can do this sort of thing already. It can spot code duplication and fix it for you by extracting a method, in real time, with a single keypress triggered by subtle visual hints like a soft wavy underline. It can convert code between imperative for-loops and functional pipelines of map/filter/fold/etc., in both directions. It can identify and automatically delete unused variables, function parameters, object properties and so on. "Simple" is somewhat in the eye of the beholder, but it's a pretty close realisation of what you seem to be asking for.

The dialogue is not had through markup in the code but rather, through the IDE giving its suggestions using visual hints, and the user starting an interaction through a keypress that brings up a menu of suggestion options ("intentions"), which may in turn lead to more options and so on.

currymj · 8 years ago
wrt your first paragraph there, I urge you to take a look at Idris. That's the exact workflow.
mrkgnao · 8 years ago
> Compiler would infer the types, and the programmer would read it and say, oh, I agree with this type, but I disagree with this type, that's perhaps wrong, this should be rather that type. Then the compiler would infer types again, based on programmer's incremental input.

This is possible today, at least in Haskell (and I'd guess in OCaml too ... surely also Scala?).

Your proposal for adaptive data structure selection based on benchmarking is intriguing!

samth · 8 years ago
This talk is right that effect systems aren't popular yet, except in the way Haskell does them. But it's wrong about the trajectory of languages. Right now is the best time to be interested in using cutting-edge languages in practice. Recent years have also seen an explosion of new languages with interesting ideas, from Rust to Purescript to Elm (in the author's preferred realm of typed languages). And industry is backing major post-Java languages like F# and Rust.

In short, the near future of PL is great, and exciting stuff keeps happening. Don't believe the naysayers.

TuringTest · 8 years ago
> Lots of people are reinventing Smalltalk on a Mac. (See Bret Victor and Eve).

At last, someone noticed! ;-)

Though judging from that remark, what the author doesn't seem to grok is why having a Smalltalk-like environment is desirable: maybe not as the primary way to program computers, but certainly as a tool alongside.

It's a shame that the family of programming languages that built on and expanded that model hasn't gained more traction in the industry (not necessarily among people who dedicate their lives to building complex software with a highly general programming language, but among the rest of us).

vanderZwan · 8 years ago
I find the jab at Bret Victor especially undeserved. He's an interaction designer (a really good one who sees through all the fads[0], which is kind of the opposite of what this one-liner implies). His focus is on better interface design, not formal language design; why criticize someone for something they're not trying to do?

And it's not useless; we probably wouldn't have had Elm without Bret Victor's Inventing on Principle[1][2]. And there has been some progress in that direction of interface design for, for lack of a better term, "programmable environments": look at Apparatus, for example[3]. Where would you even fit that on these slides?

[0] http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...

[1] http://elm-lang.org/

[2] https://vimeo.com/36579366

[3] http://aprt.us/

TuringTest · 8 years ago
Quite true. I find it likely that the next revolution in programming languages will come from designing a PL that's a good fit for these programmable environments. Maybe it should break from currently undisputed conventions like the radical separation between "data" and "source code", and be more like a spreadsheet.

End-User Development has lots of under-explored ideas on how to build software automations that don't require the end user to learn a hard formalism (even if such a formalism exists as the basis for the system). Though I understand that programming language theorists are not interested in that angle of the evolution of PLs.

fnord123 · 8 years ago
Notebook environments have a lot of traction in industry. Is that not in line with what you're hoping for?
TuringTest · 8 years ago
Yes, but they are only available for a few languages, and they're not a tool that is recognized as beneficial to programming in the large, like for example IDEs are.

Moreover, having direct introspection of a model stored in a local notebook is still quite limited compared with having it in the whole environment, like Smalltalk or HyperCard did. There are a few systems trying to explore that "structure of the project" approach, like Leo Editor or the Smallest Federated Wiki, which could be a better basis for an "Inventing on Principle" tool.

stmw · 8 years ago
This is a great presentation; I wish I could hear the talk track. While we can all differ on the right "winner" for the next programming language (I don't think Clojure is the right answer, someone else might), we are all stuck with the same set of facts - and this covers the state of things very well. Most importantly, it explains certain truths of the social/economic ecosystem for programming languages - which is what gave us Java, Python, Javascript, and a few other really popular systems that seemed unlikely to succeed when they first appeared. The reasons for their success have just as much to do with "ecosystems" as with language features.
saas_co_de · 8 years ago
On twitter he says:

"The thesis was that programming in 2030 will have very advanced research languages but mainstream languages will effectively stop advancing and we'll be last [sic - left?] with a vast insurmountable gap between the two."

That provides some context for what the slides are about.

makach · 8 years ago
My thoughts exactly. It's difficult to understand this presentation without the discussion that goes with each slide; it's too easy to project your own biases onto the meaning of each slide.