Leela Chess Zero Blog
Recently, the transformer architecture has dominated domains as diverse as vision and natural language processing. Over the past two years, the Lc0 team has been trying to answer the following question:
What chess-specific enhancements can be made to the transformer architecture?
To explore the performance of Lc0 networks relative to DeepMind’s state-of-the-art transformer networks, we embarked on a comparative analysis, inspired by the methodologies detailed in DeepMind’s latest publication. Our objective was to closely align our testing approach for Lc0 networks with the evaluation framework applied by DeepMind, allowing for a direct comparison of results.
Since its first games almost 3 months ago, LeelaKnightOdds has played over 1800 matches against a variety of opponents across a range of time controls.
In the recent blog post presenting the new WDL contempt feature added in the Lc0 v0.30 release, we shared our plans to add a lichess bot for piece odds games. While supporting arbitrary piece odds poses several challenges that are not yet resolved, we are proud to announce a big first step towards that goal: LeelaKnightOdds, now accepting your challenges on lichess.
The imminent v0.30.0 Lc0 release has two main features: attention body net support and WDL rescale/contempt. This blog post is about the latter, which continues our past efforts to provide more realistic WDL predictions with Lc0.
2022 has been a great year for Leela. A lot of new contributors appeared and made significant improvements; overall, Leela has become considerably stronger and more interesting. This year has brought huge changes to how the search is conducted, the network architecture, the available backends, and more.
It’s been a while since we released a new version of Lc0, but we finally put out v0.29.0 a few weeks ago. In this post, we’ll talk about what’s new in this release and why it took so long to come out.
In the Leela Chess project, we generate a huge amount of data. We use them to generate the network files used with Lc0 for further data generation, but also with other chess engines, such as Ceres. The same data are often used by individual project contributors to generate additional network files using the “supervised learning” approach.
It all started a couple of months ago. First the Stockfish 13 release announcement and, shortly after, the Lc0 v0.27.0 one contained identical language stating that both “teams will join forces to demonstrate our commitment to open source chess engines and training tools, and open data.” While the intention is still there and we stand behind this statement, we haven’t yet managed to make anything more formal in this direction.
With this in mind, when I was looking for an April Fools’ joke for this year, the idea of using LeelaChessZero training data to train an NNUE net seemed very enticing.
Unless you were living under a rock during the last year, you have probably heard of the revolution that has been happening in computer chess. That is assuming you are interested in computer chess, but if you are not, then why are you reading this? We are talking about Efficiently Updatable Neural Networks (referred to as NNUE, giving new meaning to backronyms), allegedly discovered by Japanese monks on sacred FORTRAN punched cards. The introduction of NNUE to the Alpha-Beta search of Stockfish resulted in impressive gains, despite initial bugs and ridicule. Since then, the dominoes have been falling one after the other, and now almost all of the top chess engines have an NNUE implementation. Obviously Lc0 couldn’t be far behind, so we proudly present LcFiSh, the latest incarnation of Lc0 with NNUE technology.