ICFP Programming Contest 2021

This is team "Rotten Lambdas"'s post-mortem for the ICFP Contest 2021.
We participated as a team of three: Chris, Daniel, and Jan. We prefer to meet in person for the contest, which worked out nicely this year since Jan, who lives in Vienna, happened to be on vacation near Hannover.
The contest started at 14:00 local time, so we met for lunch and then went to Chris' and Daniel's office to set up our notebooks and PCs.
We read the task specification and came up with a list of things we thought we might need:
- Data structures with JSON (de)serialization.
- A graphical tool for visualization.
- Ideally, support for moving nodes while respecting constraints.
- An interface to the portal API.
- Computing and showing the number of dislikes for our solutions.
- An automatic solver.
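Of these, the dislikes computation is small enough to sketch right away. As defined in the task specification, the dislikes of a pose are the sum, over all hole vertices, of the squared distance to the nearest pose vertex. A minimal sketch (class and method names are ours; coordinates are integers, so everything fits in long arithmetic):

```java
// Sketch of the dislikes metric from the task specification:
// dislikes = sum over hole vertices h of min over pose vertices v of |h - v|^2.
public class Dislikes {

    static long squaredDist(long[] a, long[] b) {
        long dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }

    static long dislikes(long[][] hole, long[][] pose) {
        long total = 0;
        for (long[] h : hole) {
            long best = Long.MAX_VALUE;
            for (long[] v : pose) {
                best = Math.min(best, squaredDist(h, v));
            }
            total += best;
        }
        return total;
    }

    public static void main(String[] args) {
        long[][] hole = {{0, 0}, {10, 0}, {10, 10}, {0, 10}};
        long[][] pose = {{0, 0}, {10, 0}, {5, 5}};
        // Nearest pose vertices: (0,0)->0, (10,0)->0, (10,10)->50, (0,10)->50.
        System.out.println(dislikes(hole, pose)); // prints 100
    }
}
```

A perfect solution places a pose vertex on every hole vertex, giving 0 dislikes, which is why hole-vertex coverage dominates the scoring.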
We distributed the initial work:
- Jan: Look and play with the portal + API; write support scripts if necessary. Also, look at the initial set of problems to get a feel for them and think about strategies for solving them.
- Chris: Quickly hack up a simple visualization tool by adapting stuff from previous contests. Then start with basic automatic solving.
- Daniel: Develop a fully-featured interactive tool for manually solving problems. Have a look at existing libraries and tools first to see if anything might be a good fit.
We kept extending the simple tool with a few more features that were easy to implement, most importantly selecting a subset of vertices and moving only those.
For manual solving, we tried to divide the problems into rough categories:
- "messy spaghetti": move the entire thing into a valid position just to have a valid submission at all.
- "somewhat simple": either a perfect solution exists and can probably be found manually, or it is obvious that no perfect solution exists.
- "tough perfect": a perfect solution exists but isn't easy to find manually, though it might be possible.
- "tough but doable": it isn't obvious how to find a solution, but folding things inward might work.
Sadly, the problems that were too hard to solve manually were also too complex for automated solving. Still, with a few extra GUI features (dragging edges, showing edge lengths, and color indicators for too-long or too-short edges) we sorted out a number of the "somewhat simple" and even "tough perfect" problems, and got at least valid (but bad) submissions for some of the rest.
Fourteen hours into the contest, Chris had written a simple brute-force solver. The main difficulty was getting the geometry right, particularly checking whether an edge lies completely inside the hole, where "inside" includes edges coinciding with the hole's boundary or touching one of its vertices. In our example image, the green edges are inside the hole and the red one is not.
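To give a flavor of the checks involved (a simplified sketch of our own, not the actual contest code): with integer coordinates, orientation tests can be done exactly in long arithmetic, and a "proper crossing" test is the easy part. The degenerate cases the check must additionally allow - edges lying on the hole boundary or touching a hole vertex - are exactly where our bugs crept in.

```java
// Exact orientation test with integer coordinates: sign of the cross product
// (b - a) x (c - a). Positive = c left of a->b, negative = right, 0 = collinear.
public class EdgeCheck {

    static long cross(long ax, long ay, long bx, long by, long cx, long cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // True iff segments pq and rs cross in a single interior point.
    // Shared endpoints and collinear overlaps deliberately return false:
    // those are the "edge on the hole boundary" cases that are allowed
    // and need separate, more careful handling.
    static boolean properCross(long px, long py, long qx, long qy,
                               long rx, long ry, long sx, long sy) {
        long d1 = cross(px, py, qx, qy, rx, ry);
        long d2 = cross(px, py, qx, qy, sx, sy);
        long d3 = cross(rx, ry, sx, sy, px, py);
        long d4 = cross(rx, ry, sx, sy, qx, qy);
        return d1 != 0 && d2 != 0 && ((d1 > 0) != (d2 > 0))
            && d3 != 0 && d4 != 0 && ((d3 > 0) != (d4 > 0));
    }

    public static void main(String[] args) {
        // Crossing diagonals of a square: a proper crossing.
        System.out.println(properCross(0, 0, 4, 4, 0, 4, 4, 0)); // true
        // Segments that only share an endpoint: not a proper crossing.
        System.out.println(properCross(0, 0, 4, 0, 4, 0, 4, 4)); // false
    }
}
```

A figure edge that properly crosses a hole edge is definitely outside, but the converse does not hold: an edge can leave the hole through a vertex or run along the boundary without a proper crossing, which is why the collinear and touching cases need their own logic.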
Combined with most of the problems being too large for a brute-force approach, these bugs kept this solver from being very effective. Chris nevertheless continued to work on it, because we fully expected a larger batch of problems to show up at some point (too many to solve manually) and thought an automatic solver would be indispensable later on.
With our combination of manual and automatic solutions, we ended up in place 21 in the lightning round.
On Saturday evening, all of us were invited to birthday parties, and we also decided that we had better sleep a bit, which we had not done during the lightning round. So we agreed to continue on Sunday at 10:00.
On Sunday, Daniel's more capable interactive tool was ready to use, including, for example, flipping parts of the figure around an edge.
For some figures, it was necessary to try and make them more compact to fit through the hole. This was mainly achieved by first identifying sections that could easily be folded inward and then rotating and moving the structure to have as few vertices outside as possible before going into the "move stuff and resolve cascading issues" mode.
Other figures were easy to fit but the challenge was to stretch them out as much as possible to reach near the edges. This was a matter of identifying loosely connected parts which could easily be dragged somewhere else and stretching parts like squares into long lines.
During the rest of the contest, we also implemented more automatic solving strategies:
- Perfect solver: For problems with a perfect solution (i.e. 0 dislikes): try all permutations of hole vertices on figure vertices, then fit the remaining vertices.
- Hairball optimizer: for vertices connected to only two others, check whether the other valid position gives fewer dislikes. Not very effective. To implement it, we needed the formula for the possible locations (0, 1, or 2) of a vertex given the two other vertices and the edge lengths. We thought that would be a simple one-line formula, but Wolfram Alpha gave us a bunch of inscrutable monster expressions, which actually worked after we translated them into Java. We later incorporated this into the other solvers (thereby introducing a correctness bug into the first and most effective one, which we only noticed after the contest's end. Too bad).
- “Jan” solver (for lack of a better name, as the strategy was his idea): start with an edge of the figure exactly matching an edge of the hole. Then recursively fit open vertices that have at least one edge to an already-placed vertex.
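The hairball formula is, in less inscrutable form, circle-circle intersection: a vertex at distances r1 and r2 from two fixed anchors must lie where the two circles around them meet. A sketch with our own naming (in the contest everything lives on an integer grid with squared edge lengths, so the candidates would still have to be rounded and re-validated):

```java
// Circle-circle intersection: positions of a point at distance r1 from A
// and r2 from B. Returns 0, 1, or 2 candidate points as {x, y} arrays.
public class TwoAnchors {

    static double[][] placements(double ax, double ay, double r1,
                                 double bx, double by, double r2) {
        double dx = bx - ax, dy = by - ay;
        double d = Math.hypot(dx, dy);
        // No intersection if the circles are too far apart, nested,
        // or have coincident centers.
        if (d == 0 || d > r1 + r2 || d < Math.abs(r1 - r2)) {
            return new double[0][];
        }
        // Distance from A to the foot of the chord along AB,
        // and the half-chord height perpendicular to AB.
        double a = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
        double h = Math.sqrt(Math.max(0, r1 * r1 - a * a));
        double mx = ax + a * dx / d, my = ay + a * dy / d;
        if (h == 0) {
            return new double[][]{{mx, my}}; // circles touch in one point
        }
        double ox = -dy / d * h, oy = dx / d * h;
        return new double[][]{{mx + ox, my + oy}, {mx - ox, my - oy}};
    }

    public static void main(String[] args) {
        // A = (0,0), B = (4,0), both distances 2.5:
        // the two candidates are (2, 1.5) and (2, -1.5).
        for (double[] p : placements(0, 0, 2.5, 4, 0, 2.5)) {
            System.out.println(p[0] + " " + p[1]);
        }
    }
}
```

The hairball optimizer amounts to computing both candidates for each two-neighbor vertex and keeping whichever valid one scores fewer dislikes.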
None of these was a big improvement over the initial solver. Unfortunately, we did not record which of our solutions were done by hand and which ones by solvers. Our guess would be about 50/50.
We made little use of the bonus locations. The only two bonuses we could easily score by moving some edges manually went toward optimizing dislikes in one case and getting a valid solution despite one too-long edge in another.
Our final ranking was place 40 (out of 160) in the main division.
In retrospect, we feel that we would have performed better if we had concentrated on either purely manual or purely automatic solving. If we had decided on assisted manual solving early on, our tools would probably have gotten more transformations and ease-of-use features.
If we had concentrated on automatic solving, we would certainly have fixed the bugs in our geometry code and for example added a visualizer to the automatic solving to gain more insight into promising approaches.
Our performance notwithstanding, this was our favorite contest in several years. The task description was clear and concise, practically without errors or ambiguities. Solution submission, including visualization, worked very well and was useful for getting a feel for the problems. The task was fun and manageable for both single-person and larger teams, we think. We tremendously enjoyed all 72 hours of this year's contest. Many thanks to the organizers!
Our repo is on GitHub.