I discovered Sketch on the OCaml forum just after I finished the alpha version of domical. I'll publish domical once some issues inherited from iodide are fixed.
Sketch is really awesome, and serves different purposes. Domical is just an interpreter for iodide notebooks, which are designed to be portable and to execute all kinds of languages: JS, Python (those crazy folks compiled the full interpreter, and even numpy, to WASM) and now OCaml.
I can see that Sketch has a lot more features.
A key feature of iodide is its simplicity and portability: the whole notebook is written in a readable, Markdown-like format called jsmd. Take a look at the page's source code to get a better idea.
So we definitely have a lot to share on our respective projects, even if I am not one of the main contributors to iodide (I just did the OCaml part).
If you want to continue this discussion, my email is on my homepage :)
Of course, I put the "`" there to indicate the interpreter command. The compiler is invoked as `ocamlc`.
The online version is basically a toplevel executed by your browser's JS engine, and it is more performant than its natively executed equivalent.
What's the current state of OCaml on Windows? I'd love to use OCaml on more of my projects, but this has been a roadblock for me.
What I'm looking for is the ability to produce native binaries without Cygwin. I just checked without installing anything and it appeared as though OCPWin was still the choice, but it's been stuck on OCaml version 4.01.0, which was released five years ago. I know that OPAM is pushing toward a 2.0 release very soon and I didn't know if that was going to coincide with a stable OCaml/OPAM release on Windows, which would fix my problems. Does anyone have an idea?
I'm looking forward to the day when I won't need Cygwin even in the build environment. Since the OCaml compiler itself works fine on Windows and modern build systems like "dune" are also Windows-friendly I'm fairly optimistic this can happen soon. I think it'll mostly be a matter of removing accidental Unix-isms (like unnecessary use of symlinks) in the build scripts.
Mostly, I'd like to ensure that I don't link against the Cygwin DLL, so that I don't have to attempt to distribute it. More selfishly, I'd like a straightforward installation process where I pull down only a binary or two and have a working environment plus the ability to integrate additional packages.
I didn't know about dune. Looks neat. Is this meant to be used in conjunction with opam?
LexiFi uses OCaml on Windows in an industrial environment (they even have .NET integration), so the language is stable. Many packages work fine on Windows too. opam already builds, I think, but is not yet stable; you could ask them on GitHub, though. The right way to install OCaml is [1]. I don't think you can avoid Cygwin completely, since you need rsync and similar tools for some packages, though the produced binaries do not require Cygwin.
I've thought about it and wrote a bunch of trial software to test things. Primarily, I work on Linux even though I support Windows. There are also some features in OCaml that I like which are not available in F#, such as the module system, polymorphic variants and, in the past, camlp4. While these can be worked around, the other use case is running the software on an HPC system using MPI. For me, OCaml or F# is glue, so I'm not worried about performance. I am a little worried that the HPC administrators will have some difficulty getting the correct setup for F# across all the nodes. For OCaml I don't view this as much of a problem, since it just produces binaries. Now, I absolutely could be wrong about this, and I'd appreciate it if someone were to chime in if so.
All nice and dandy, but please, for the sake of everything, can this site just stop auto-advancing through the steps?!
I click the last one and it advances a step before I can comprehend what the command did or read any context.
That breaks the flow completely... and I'd be perfectly fine advancing manually once I feel I'm done with the current step.
Other nits:
* show the ;; in the step code snippets, at least at the beginning - otherwise you can get confused fast if you hack around in the "terminal"
* add a 'run this snippet now' button to the code snippets - a small green triangle or whatever those IDEs are using nowadays. That allows copying of examples, either to modify them before execution or - like me - to paste them into a "real" OCaml REPL on my desktop.
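For context on the `;;` nit: the toplevel only evaluates a phrase once it sees the double semicolon, so a snippet without it just leaves the REPL waiting. A minimal example (mine), which works pasted into the toplevel or compiled as a plain file:

```ocaml
(* Every toplevel phrase must end with ;; - without it the REPL
   keeps waiting for more input instead of evaluating. *)
let square x = x * x;;

let answer = square 7;;
(* the toplevel replies: val answer : int = 49 *)
```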
Don't get me wrong, the site and the idea are nice; those things just irritate me, almost irrationally, from a UX POV.
I know cryptocurrency is contentious in this community so apologies for injecting it here:
OCamlPro is heavily involved in the development of Tezos, and that's a big part of the reason I invested in it. One of the main selling points of Tezos is formal verification, for which OCaml (being a pure-ish functional language) was instrumental.
Regardless of the future of crypto (fad or world-changing), if it's handling money or other assets it should be done safely. This is a step in that direction.
It was __very__ amusing to play with variants, pattern matching and units. If a program compiles, it almost always just works.
It's an indispensable tool for playing around and getting more ideas on how to write better programs.
And it was disastrous in __everything__ else.
Installation and compiler bugs. Terrible tooling. Terrible code packaging. Terrible JavaScript interop. Painfully archaic syntax. Sometimes extremely esoteric compiler errors.
After a week I threw out all the code and rewrote it in TypeScript. I won't ever use new languages on projects with deadlines, no matter how sweet the evangelists' talks are.
I would encourage you to try it again. There were some big changes in ReasonML in the last year.
I picked up ReasonML about 3 months ago and experienced almost none of the issues you described. It's still a little rough around the edges compared to, for example, Elm, but the tooling, packaging, JavaScript interop, etc. are really good IMO.
We have built a design-to-code tool (Sketch to clean HTML & CSS) in Reason[1], and it has been the best experience ever in my 10 years of programming.
We even used Reason for writing Objective-C/Cocoa through CocoaScript (a JavaScript-to-ObjC bridge), thanks to Reason/BuckleScript's excellent JavaScript interop.
The language and compiler are rock solid - they have to be, because they have been worked on for over 40 years, since the days of ML. For the last three years a team including Hongbo Zhang, Jordan Walke, Cheng Lou and others has been actively working on JavaScript tooling for OCaml, and it is an absolute pleasure to use today.
> If a program compiles, it almost always just works.
I can never understand this claim when people make it about languages like Haskell and ReasonML.
How is the compiler catching your logic bugs? If you write 'a + b' when you should have written 'a - b', the compiler isn't going to catch that in either of those languages. It'll compile, but it won't work. Do you never make these kinds of logic bugs?
The type system is a lot more expressive than the ones found in Java/C#/C++, thanks to algebraic data types (ADTs).
Yaron Minsky (of Jane Street, the largest OCaml user) coined the phrase "make illegal states unrepresentable" to explain how ADTs let us enlist the compiler's help in expressing our intent correctly, and Richard Feldman (of NoRedInk, the largest Elm user) has given an absolute gem of a talk that dives into that idea with relatable, concrete examples.
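As a hedged OCaml sketch of what that phrase means in practice (the `connection` type is my own invention, not from Minsky's writing): the nonsensical state "connected but without a socket" simply cannot be constructed, because the `Connected` case always carries its data:

```ocaml
(* Each state carries exactly the data that is valid for it;
   there is no way to build a Connected value without an fd. *)
type connection =
  | Disconnected
  | Connecting of float   (* seconds until retry *)
  | Connected of int      (* socket file descriptor *)

let describe = function
  | Disconnected -> "offline"
  | Connecting s -> Printf.sprintf "retrying in %.1fs" s
  | Connected fd -> Printf.sprintf "online (fd %d)" fd
```

Compare this with a record of nullable fields, where "connected but fd is null" is representable and has to be ruled out by runtime checks.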
I've done a lot of work in Python and a fair amount of OCaml and Rust.
I find that in Python - probably in any dynamic language - a lot of the tests I write are just to make sure the pieces fit together. It's too easy to write well-tested units that don't work together because of a misplaced argument or something at the interface. That type of testing is largely not needed with static typing.
Also, when I create a data structure in Python my mental model of what it means and how it relates to other parts of the program can be kind of fuzzy. Which means the pieces may not fit like I think they do because I missed a detail.
But a strong static type system forces me to clarify my thinking much earlier in the coding process than in a dynamic language. It encourages me to encode more of my mental model into the code and that allows the compiler to double check me.
I believe this claim has two narrower meanings:
1) If it compiles, you won't get an "undefined is not a function".
2) If it worked before, you changed something, and you looked at all the places the compiler told you to look at, then all the code it didn't flag is very likely to continue working correctly.
(1) is a very convenient feature and easy to advertise, but (2) is what makes people jump around and tell everyone they should use ML: it means you're not afraid of changing something in the first place.
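A tiny sketch of (2) in OCaml (the `shape` type is my own illustration): extend a variant, and the compiler flags every match that hasn't caught up, effectively listing the places you need to revisit:

```ocaml
type shape =
  | Circle of float
  | Rect of float * float
(* If you later add a case such as Triangle, the match below
   becomes non-exhaustive and the compiler points straight at it. *)

let area = function
  | Circle r -> 3.14159265 *. r *. r
  | Rect (w, h) -> w *. h
```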
There is a phrase from Yaron Minsky about OCaml, "make illegal states unrepresentable", which summarizes this.
It's more about programming declaratively and functionally, as well as strong types that allow you to be very expressive. My experience with F# is similar: if you can get your code to compile, it almost always just works as you expect. This doesn't mean you can avoid writing tests, though.
I think this is something that needs to be experienced firsthand a few times for it to sink in. It's something like the backpropagation update of a neural network: no matter how long you stare at it, it won't be clear, but if you derive it yourself it becomes super clear.
I can say that this effect seems very real. I think the reason goes beyond those shallow type-mismatch errors I often make. Part of the reason, I think, is that the correct program becomes clearer to write, compared to the erroneous programs you could have written, if you use the type system. In other words, types can make some of the erroneous versions cumbersome, or at times impossible, to write - and good luck mud-wrestling with the compiler until it's convinced.
I strongly encourage you to try it and not be discouraged. This really needs to be experienced firsthand to be convincing.
The more meaning you can encode in the type system, the more the compiler can help you.
It won't help you catch a + b... unless, for example, you declared a to be of type meters and b of type inches.
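OCaml has no built-in units of measure the way F# does, but single-constructor wrappers give a rough approximation (the types, operator, and conversion factor below are my own illustration):

```ocaml
(* Distinct wrapper types: adding meters to inches no longer typechecks. *)
type meters = Meters of float
type inches = Inches of float

let ( +@ ) (Meters a) (Meters b) = Meters (a +. b)
let to_meters (Inches i) = Meters (i *. 0.0254)

(* Meters 1.0 +@ Inches 12.0   <- rejected: expected meters, got inches *)
let total = Meters 1.0 +@ to_meters (Inches 100.0)
```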
I think that statement is better viewed in light of refactoring or changing existing code.
If the code worked/compiled before, and I made a change that still compiles, I can be reasonably confident it didn't break something else accidentally (assuming the program was reasonably well designed to begin with).
It makes reviewing pull requests much easier.
Doing so in other programming languages always makes me nervous: in C an innocuous change could introduce a memory-corruption bug because you broke an implicit invariant of the code; in Python it could raise a type error when called in an unexpected way.
To review a pull request you usually have to look at the entire file (or sometimes the entire project), and even then you can't be sure.
As for the logic bugs: that is what tests are for.
The compiler does not catch your logic bugs. But stronger, more expressive type systems catch syntax bugs and structural errors like you wouldn't believe. It's not all or nothing.
I have spent an embarrassing amount of time troubleshooting a < that should have been a > (in F#), where it resulted in a very subtle defect. I'm not aware of a type system that would have prevented this, and yet I do strongly agree with the assertion that in a more strongly/statically typed language, "compiling successfully = very likely to run correctly."
It kind of applies to those languages. It more applies to languages like SPARK or Eiffel. But yeah like other posters have said, it's more about avoiding runtime errors.
It doesn't, and I agree with your sentiment. IIRC, type-related errors account for ~10% of bugs.
If (and only if) types are tied into your tooling ecosystem (for example, an IDE that uses that info to its full extent to aid in code completion, refactoring, code analysis, etc.), they are very useful.
You might want to try Rust. It has a lot of the ML niceties but with a modern syntax and pretty good tooling. I wouldn't recommend anything other than JavaScript/TypeScript for frontend stuff though.
I am writing Rust more or less daily for my research code (in machine learning & natural language processing), but be prepared to implement a lot of functionality yourself or bind C libraries. The numerical computing/machine learning ecosystem is still very young/small on Rust.
> I wouldn't recommend anything other than JavaScript/TypeScript for frontend stuff though.
My assumption was that anything else would be painful, but I tried some ScalaJS after one too many frustrations with TypeScript and was pleasantly surprised at how easy it all was.
I just finished a ReasonReact project which I started about 3 months ago, and I found the experience quite nice (working with VSCode). ReasonML is still at an early stage, but I prefer it over JS/TS for React frontends. The language is a great fit for React. The only thing I missed was the amount of Q&A material that I'm used to from JS/TS.
Haskell and OCaml try their best not to be popular. Elm is very beginner-friendly (docs, installation, great libs for every purpose) and is like 5 years old. OCaml is more than 20 years old, and for a beginner, installing everything and making a REST API is a pain. Why is that?
>OCaml is more than 20 years old, and for a beginner, installing everything and making a REST API is a pain. Why is that?
Nowadays things are quite neat with the opam/dune/odig stack. You can install packages and generate readable docs for them, just like in Rust. For the REST API you could just take any existing framework, I think, e.g. [1] [2].
You should also consider that OCaml is a language which for a long time was intended for a different kind of project (yeah, webdev is not the only programming domain, shocking), so its direction was dictated by the devs of Coq and similar projects rather than by webdevs. Now Facebook and other companies are changing the state of affairs.
I am usually not one to dismiss a language for syntax alone, but the syntax for arrays versus lists feels very cumbersome. Can an experienced OCaml developer tell me how common it is to use lists versus arrays?
I can only imagine how many bugs I would type at this stage of my career after being used to [] for arrays but forgetting the pipe, and getting a list instead. It's also very difficult to type.
> I can only imagine how many bugs I would type at this stage of my career after being used to [] for arrays but forgetting the pipe, and getting a list instead.
Luckily, the compiler would catch this anywhere you tried to use a list where an array was expected.
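To sketch what that looks like (example mine): `[ ... ]` builds a list and `[| ... |]` an array, and the two are unrelated types, so the mix-up surfaces immediately:

```ocaml
let xs = [1; 2; 3]      (* int list  *)
let ys = [|1; 2; 3|]    (* int array *)

let first arr = arr.(0)  (* .( ) indexing works only on arrays *)

(* first xs   <- rejected: this expression has type int list
   but an expression was expected of type 'a array *)
let y = first ys
```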
> how common it is to use lists versus arrays?
Lists are far more common in day-to-day usage. The language even has native syntax for list consing and destructuring, which is useful for pattern matching.
OCaml arrays do have their place though, if you have a fixed number of items and need random access. Float arrays are also specialized, their elements are unboxed and contiguous which helps with performance for certain types of number-crunching applications.
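As a small illustration of both halves (the function names are mine): cons patterns on lists, and `Array.make` with `.(i)` indexing on arrays:

```ocaml
(* Lists: cons syntax and structural pattern matching *)
let rec sum = function
  | [] -> 0
  | x :: rest -> x + sum rest

(* Float arrays: unboxed, contiguous elements with O(1) random access *)
let samples = Array.make 4 0.0
let () = samples.(2) <- 1.5
```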
I was thinking quite the opposite, I admit I don't know much about OCaml yet but I was very interested in trying ReasonML. Anyways, I would think you would use an array when you need sequential access and potentially help locality of reference when doing some number crunching? OCaml arrays are random access?
Well, any bugs you would get are very immediate (due to the strong static typing). As for your other point, arrays are fairly rare. You only need arrays when you need arbitrary indexing into a sequence. Most of the time, however, you're simply iterating through a list in some fashion.
What do you think?
If you want to try F# w/o any installation, try it in Jupyter on Azure Notebooks:
https://notebooks.azure.com/Microsoft/libraries/samples/html...
Browse without logging in, or, if you want to edit/run/etc., sign in, click Clone, then play around.
It's all free.
Cheers.
Maybe you could word it as the native `ocaml` interpreter instead? OCaml can compile to machine code.
"The OCaml native-code compiler, version 4.07.0"
It had -- in 2.5s :-)
For the performance we really have to thank the folks of js_of_ocaml!
[1] https://github.com/rizo/awesome-ocaml
https://fdopen.github.io/opam-repository-mingw/
OCaml is also used at Bloomberg and Jane Street.
It's a pity that functional programming and formal verification aren't enough to overcome selfish and destructive founder behavior.
[1] Protoship Codegen - https://youtu.be/hdww16KK8S8
It is titled "Making Impossible States Impossible" and should clarify this concept nicely. https://www.youtube.com/watch?v=IcgmSRJHu_8
It's kind of like spell check but for design.
This is a good read about it: https://fsharpforfunandprofit.com/series/designing-with-type... by Scott Wlaschin.
"Modern" dev is not about writing algorithms, it's about sewing together many different frameworks/libraries/technologies.
I'm probably biased as most of my experience is in webdev, but other domains seem to have the same issues (probably not to the same extend).
However, in languages with rich type systems you can do type-driven development, which reduces the surface area for many kinds of errors.
Basically a simplified version of dependent typing, where one makes use of the type system to represent valid states and transitions between them.
For example, one can use ADTs to represent file states, with the file operations only compiling if the file handle is in the open state.
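A minimal OCaml sketch of that idea using phantom types (all names invented): the `'state` parameter exists only at compile time, so applying `read` to a closed handle is a type error:

```ocaml
(* 'state is a phantom parameter: it never appears in the data,
   only in the types the compiler checks. *)
type opened
type closed
type 'state handle = { name : string }

let open_file name : opened handle = { name }
let read (h : opened handle) = "contents of " ^ h.name
let close (h : opened handle) : closed handle = { name = h.name }

(* read (close (open_file "log"))   <- rejected:
   close returns a closed handle, read expects an opened one *)
```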
http://www.arewelearningyet.com
Though I am pretty confident that Rust will get there in a few years.
[1] http://opam.ocaml.org/packages/eliom/
[2] http://opam.ocaml.org/packages/webmachine/