As a professional haskeller, I feel it necessary to point out for people in this thread who are less exposed to Haskell and who may be Haskell-curious...this is not what real-world commercial Haskell code looks like. To use a C analogy, I'd say it's closer to IOCCC entries than Linux kernel code.
Thanks for that. Having read the article, I was left with the overwhelming impression that I'd have solved it in a totally different way if I were trying it in OCaml.
Briefly, I'd have started with an array that, for each colour, holds an array of the coordinate pairs for that colour. I'd probably then have sorted these by length. The state would also include an initially empty array for the coordinates of each placed queen.
To solve, I'd take the head array as my candidates, and the remaining array of arrays as the next search space. For each candidate, I'd remove that coordinate, and anything a queen placed there could attack, from the remaining arrays, and recursively solve that. If filtering for a candidate empties any of the remaining arrays, you know you've generated an invalid placement and can backtrack (see the sketch below).
At no point would I actually have a representation of the board. That feels very imperative rather than functional to me.
To me, this solution immediately jumps out from the example - one of the queens is on a colour with only 1 square, so it HAS to be there. Placing it there immediately rules out one of the choices in both colours with 2 squares, so their positions are known immediately. From that point, the other 2 large regions have also been reduced to a single candidate each.
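Roughly, as an untested sketch of that search in OCaml (my names are made up, and I've used lists rather than arrays for brevity):

    (* Squares in conflict with a queen at (r, c). I said "queen move" above;
       LinkedIn's variant actually only forbids adjacent diagonals, in which
       case the diagonal test becomes: abs (r - r') = 1 && abs (c - c') = 1. *)
    let attacks (r, c) (r', c') =
      r = r' || c = c' || abs (r - r') = abs (c - c')

    (* regions : (int * int) list list -- one list of coordinates per colour,
       pre-sorted by length. Place one queen per region, filtering attacked
       squares out of the remaining regions as we go. *)
    let rec solve placed = function
      | [] -> Some (List.rev placed)
      | candidates :: rest ->
          candidates
          |> List.find_map (fun q ->
                 let rest' =
                   List.map (List.filter (fun sq -> not (attacks q sq))) rest
                 in
                 (* A region emptied out: dead end, so backtrack. *)
                 if List.exists (( = ) []) rest' then None
                 else solve (q :: placed) rest')

    let solve_board regions =
      regions
      |> List.sort (fun a b -> compare (List.length a) (List.length b))
      |> solve []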
As someone who has never touched Haskell and who knows nearly nothing about it, Haskell is not, in fact, a "dynamically typed, interpreted language" which "has no currying".
This gives me a message "Unavailable Due to the UK Online Safety Act" which sounds like nonsense for a blog post, but IANAL. Can anyone summarise the post, or suggest why there'd be a reason my online safety is compromised by it?
It's pretty clear that the issue is not the post, but the fact that you are in the UK, and the site author does not deem you important enough.
The site author himself has blocked users from the UK because of that stupid law that you cite in your comment:
"The UK's Online Safety Act requires operators of 'user to user services' to read through hundreds (if not thousands) of pages of documentation to attempt to craft "meaningful" risk assessments and 'child access assessments' or face £18,000,000 fines, even imprisonment."
I set this as part of a Scala programming assignment for my second-year undergraduate class at UNE (Australia) last term. However, during the working, a square is not Queen | Eliminated but Set[Queen | NotQueen].
Largely so that, from a programming perspective, it becomes a simplified version of Einstein's Riddle, which I showed the class done in a similar way:
https://theintelligentbook.com/willscala/#/decks/einsteinPro...
where at each step, you're just eliminating one or more possibilities from a cell that starts out containing all of them.
Queens has fewer rules to code, making it more amenable for students.
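In OCaml terms (a rough, hypothetical translation of that Scala representation), each square carries the set of states it could still be in:

    type state = Queen | NotQueen

    module States = Set.Make (struct
      type t = state
      let compare = compare
    end)

    (* Every square starts out able to be either... *)
    let unknown = States.of_list [ Queen; NotQueen ]

    (* ...and solving is eliminating possibilities until every square's
       set is a singleton. *)
    let decided sq = States.cardinal sq = 1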
Thanks for saying this; that is how I usually play this game, and I was confused by TFA going with backtracking/guessing a next attempt when constraint propagation seems easier. I thought I was missing something.
Just when I start thinking I am smart, someone drops this :) Haskell certainly looks graceful but is imposing! I feel pretty good if I can do functional stuff in Rust, but this is next level.
I think it's helpful to separate out the "Haskell is imposing" from "this language that I don't know is imposing". It is unrealistic to expect to be able to read a language you don't know. It is easy to be accidentally trained otherwise, because there are several languages out there that are hardly different from each other except in unimportant details - in a perfect world perhaps we wouldn't need both of them - and if you work with a couple of them you might get used to being able to read across languages. But those are really the exceptions rather than the rule, even if they are very, very large and popular exceptions.
I won't say this reduces the "Haskell is imposing" to zero, but a non-trivial amount of the initial impression of imposingness is just the very different syntax, such as the way functions are not called with parentheses after the function name. But the different syntax isn't really that big a deal. You just don't know it and aren't used to it. Under the hood it does have some differences, but the differences are magnified when you try to swallow the surface differences and the deep differences all in one shot. Nobody who knows Haskell did that; they learned it the same way you learn any other language, one bit at a time.
I believe I first encountered this problem while working through SICP. I was confused why it's called LinkedIn Queens, since this problem is a classical one in CS that definitely pre-dated the existence of LinkedIn.
LinkedIn's daily Queens puzzle is different than the classical problem in two ways: it adds colored regions which also must have exactly one queen, and it relaxes the diagonal constraint -- queens are allowed in the same diagonal as long as they are not adjacent.
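As a predicate, that rule looks something like this (a sketch of the variant's constraint, not LinkedIn's actual code; the one-queen-per-row, per-column, and per-region requirements are enforced separately):

    (* Two queens conflict if they share a row or column, or touch
       diagonally; sharing a longer diagonal is fine in this variant. *)
    let conflicts (r, c) (r', c') =
      r = r' || c = c' || (abs (r - r') = 1 && abs (c - c') = 1)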
1) It's not too hard to make a problem with at least one solution (just put the queens down first, then draw boxes), but there isn't any good way of making levels with unique solutions.
2) Once you've accomplished that, it's hard to predict how hard a level will be, and then it's hard to make levels easier / harder.
I happen to be currently researching this topic (well, I'm doing all kinds of these grid-based puzzles, but this is an example). The algorithm tries to make "good" levels, but there is a good probability it will end up with something useless we need to throw away, and then try again.
It's easy to make levels which are trivial, and similarly easy to make levels which are far beyond human ability, but hitting things in the 'human tricky but solvable' sweet-spot is where most of the difficulty comes from.
I should probably try writing up a human-readable version of how I do it. It involves a bunch of Rust code, so I can hit a whole bunch of trendy topics!
Do you have a blog? I'm interested.
Given that this could be a variant of "exact cover", using ZDDs (zero-suppressed decision diagrams) to explore the problem space might simplify finding exact puzzles in addition to puzzles that require lookahead.
> I should probably try writing up a human-readable version of how I do it. It involves a bunch of Rust code, so I can hit a whole bunch of trendy topics!
Generally what is needed is a subroutine that can tell you 1) if the problem has a solution, and 2) if the solution is unique (a common requirement for puzzles like these). Using such a model as a subroutine, a heuristic search can be done to gradually build up puzzles. If your solver technology of choice can handle quantified problems, that could be used to integrate those two problems into one, but that is quite a lot harder to do.
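A cheap way to get both answers out of one backtracking solver is to count solutions but stop at 2. A sketch, reusing the region-list representation and the attacks predicate from the OCaml sketch upthread:

    (* Counts solutions, capped at 2: 0 = unsolvable, 1 = unique, 2 = several. *)
    let rec count_up_to_2 = function
      | [] -> 1
      | candidates :: rest ->
          List.fold_left
            (fun acc q ->
              if acc >= 2 then acc  (* we already know enough: stop searching *)
              else
                let rest' =
                  List.map (List.filter (fun sq -> not (attacks q sq))) rest
                in
                if List.exists (( = ) []) rest' then acc
                else min 2 (acc + count_up_to_2 rest'))
            0 candidates

    let solvable regions = count_up_to_2 regions >= 1
    let unique regions = count_up_to_2 regions = 1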
If your base solver can be run in various configurations, with different levels of reasoning and assumptions, and can report the amount of search needed (if any), that can be very useful as a way to measure hardness. In "Sudoku as a Constraint Problem" (https://citeseerx.ist.psu.edu/document?doi=4f069d85116ab6b4c...), Helmut Simonis tested lots of 9x9 Sudoku puzzles against various levels of propagation and pre-processing, categorizing them by the level of reasoning needed to solve without search. The MiniZinc model for LinkedIn Queens (https://news.ycombinator.com/item?id=44353731) can be used with various solvers and levels of propagation as such a subroutine.
Now, for production-level puzzle making, such as what King does for Candy Crush, the problems and requirements are even harder. I've heard presentations where they talk about training neural networks to play like human testers - not optimal play, but the most human-like play - in order to test the hardness level of the puzzles.
It's the same problem as with generating good Sudoku boards. It's not easy, and there aren't many publicly available solutions, but solutions exist.
A common opinion is that a good board is solvable without the use of backtracking. A set of known techniques should be enough to solve the board. To validate if a board is "fun" you need to have a program that can solve the board using these known techniques. Making that program is much harder than just making a general solver. And then you need to find the boards that can be validated as fun. Either you search through random boards, or you get clever...
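In sketch form, such a solver is just a fixpoint loop over a list of rules, where each rule either makes progress without guessing or reports that it can't; the rules themselves (hypothetical here) are the hard, puzzle-specific part:

    (* A rule inspects the board and returns Some simplified_board when it
       can make progress without guessing, None otherwise. *)
    type 'board rule = 'board -> 'board option

    let rec solve_with_techniques ~rules ~complete board =
      match List.find_map (fun rule -> rule board) rules with
      | Some board' -> solve_with_techniques ~rules ~complete board'
      | None -> if complete board then `Fun else `NeedsBacktracking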
I've noticed that puzzles which CP-SAT's presolver can solve outright - so the SAT search never even needs to be invoked - basically adhere to this (no backtracking, known rules), e.g.:
#Variables: 121 (91 primary variables)
- 121 Booleans in [0,1]
#kLinear1: 200 (#enforced: 200)
#kLinear2: 1
#kLinear3: 2
#kLinearN: 30 (#terms: 355)
Presolve summary:
- 1 affine relations were detected.
- rule 'affine: new relation' was applied 1 time.
- rule 'at_most_one: empty or all false' was applied 148 times.
- rule 'at_most_one: removed literals' was applied 148 times.
- rule 'at_most_one: satisfied' was applied 36 times.
- rule 'deductions: 200 stored' was applied 1 time.
- rule 'exactly_one: removed literals' was applied 2 times.
- rule 'exactly_one: satisfied' was applied 31 times.
- rule 'linear: empty' was applied 1 time.
- rule 'linear: fixed or dup variables' was applied 12 times.
- rule 'linear: positive equal one' was applied 31 times.
- rule 'linear: reduced variable domains' was applied 1 time.
- rule 'linear: remapped using affine relations' was applied 4 times.
- rule 'presolve: 120 unused variables removed.' was applied 1 time.
- rule 'presolve: iteration' was applied 2 times.
Presolved satisfaction model '': (model_fingerprint: 0xa5b85c5e198ed849)
#Variables: 0 (0 primary variables)
The solution hint is complete and is feasible.
#1 0.00s main
a a a a a a a a a a *A*
a a a b b b b *B* a a a
a a *C* b d d d b b a a
a c c d d *E* d d b b a
a c d *D* d e d d d b a
a f d d d e e e d *G* a
a *F* d d d d d d d g a
a f f d d d d d *H* g a
*I* i f f d d d h h a a
i i i f *J* j j j a a a
i i i i i k *K* j a a a
Together with validating that there is only 1 solution, you would probably be able to make the search for good boards more guided than random creation.
Simulated annealing with a difficulty heuristic (like minimum required logical steps) works well - start with a valid solution, then randomly modify colors while maintaining uniqueness.
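A generic sketch of that annealing loop (mutate, is_unique, and difficulty are placeholders; difficulty could be the minimum-logical-steps count):

    let anneal ~mutate ~is_unique ~difficulty ~steps start =
      let current = ref start and best = ref start in
      for i = 0 to steps - 1 do
        let temperature = 1.0 -. (float_of_int i /. float_of_int steps) in
        let candidate = mutate !current in
        (* Only consider candidates that keep the solution unique. *)
        if is_unique candidate then begin
          let delta = difficulty candidate - difficulty !current in
          (* Prefer harder boards, but occasionally accept an easier one
             early on to escape local optima. *)
          if delta > 0 || Random.float 1.0 < 0.1 *. temperature then
            current := candidate;
          if difficulty !current > difficulty !best then best := !current
        end
      done;
      !best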
Awesome, I'm writing a "logical solver" just like this - I'll hopefully also have something to post here when I'm done.
I'm trying to use it during the generation process to evaluate the difficulty. A basic heuristic I'm trying to work with is counting the number of times a particular colour is eliminated - the higher the count, the harder the problem, since it requires more iterations of the rules to solve. (A counter-example to this would be a board with 1 colour covering everything except the cells the other colours' queens need to be placed on.)
Also, I'm trying to evaluate the efficacy of performing colour swaps, but it's proving more challenging than I thought. The basic idea is that you can swap the colours of neighbouring cells to line up multiple colours, so there are fewer obvious "single cells" which contain the queen. The problem is that this can introduce other solutions, and it's difficult to tell whether a swap makes the puzzle harder or simpler to solve.
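One way to at least catch the broken swaps (a sketch with hypothetical helpers): re-run the uniqueness check and the difficulty heuristic after each candidate swap, and only keep swaps that pass both:

    (* Keep a neighbouring-cell colour swap only if the board still has a
       unique solution and the heuristic says it got no easier. *)
    let try_swap ~swap ~is_unique ~difficulty board =
      let board' = swap board in
      if is_unique board' && difficulty board' >= difficulty board
      then Some board'
      else None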
Previous threads solving LinkedIn Queens:
- with SMT (11 days ago, 47 comments) https://news.ycombinator.com/item?id=44259476
- with APL (10 days ago, 1 comment) https://news.ycombinator.com/item?id=44273489 and (8 days ago, 20 comments) https://news.ycombinator.com/item?id=44275900
- with MiniZinc (1 day ago, 0 comments) https://news.ycombinator.com/item?id=44353731