wordler
is a Wordle auto-solver which uses a tree-pruning strategy to compute sequential optimal guesses, able to solve most games in 3 or 4 guesses.
Rather than just assisting a player by suggesting valid remaining words, wordler
aims to compute the optimal guess based on all prior knowledge. Benchmarking shows wordler
is able to guess the correct answer in 3-4 guesses on average, depending on the dictionary size used.
Instantiate a wordler strategy with an initial guess and a dictionary size. init_guess
can be any valid 5-letter word, and top_n
will populate the dictionary with the n highest-ranked 5-letter words in the Project Gutenberg frequency list.
from wordler import PruneStrategy, Tile
strat = PruneStrategy(top_n=4000, init_guess="stare")
Assuming you are manually playing the game, you can update the strategy with each result, and extract the next optimal guess.
strat.update_state(
    {
        "word": "stare",
        "result": [ Tile.GREY, Tile.YELLOW, Tile.GREY, Tile.GREY, Tile.GREY ]
    }
)
guess = strat.decide_guess()
print(guess)
This will print the next guess:
month
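In a manual game this update/guess cycle simply repeats until the answer is found. Below is a minimal sketch of such a loop; it assumes Tile also defines a GREEN value for correct letters (only GREY and YELLOW appear above), and read_result is a hypothetical helper for parsing your typed-in result:

from wordler import PruneStrategy, Tile

# Map single characters to tiles; Tile.GREEN is assumed here
TILE_CODES = {"g": Tile.GREEN, "y": Tile.YELLOW, ".": Tile.GREY}

def read_result(prompt):
    # Hypothetical helper: type e.g. ".y..." for the result shown above
    return [TILE_CODES[c] for c in input(prompt).strip().lower()]

strat = PruneStrategy(top_n=4000, init_guess="stare")
guess = "stare"  # the initial guess is played first
for attempt in range(6):
    print("guess:", guess)
    result = read_result("result (g/y/.): ")
    if all(tile == Tile.GREEN for tile in result):
        print("solved in", attempt + 1, "guesses")
        break
    strat.update_state({"word": guess, "result": result})
    guess = strat.decide_guess()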
To simulate the strategy on a wordle game with a particular answer, instantiate a Wordle
object with an answer, and pass it to the strategy. The strategy will perform repeated guesses, updating its internal state with the gained knowledge and pruning its word list accordingly.
You can run the strategy over many games this way to benchmark its performance:
from wordler import Wordle
for answer in [ "crate", "groom", "hello", "tiger", "proxy" ]:
    # Set up Wordle game with given answer
    wordle_game = Wordle(answer)
    # Initialize a PruneStrategy with the 4000 most common words in the dictionary and run it
    strat = PruneStrategy(top_n=4000, init_guess="stare")
    strat.run_strategy(wordle_game)
    print( answer, strat.guesses_made )
This will print:
crate ['stare', 'trace', 'crate']
groom ['stare', 'croon', 'brood', 'proof', 'groom']
hello ['stare', 'olden', 'hello']
tiger ['stare', 'tenor', 'tiger']
proxy ['stare', 'croon', 'proud', 'proxy']
By considering the uncertainty reduction that results from every (guess, answer) pair composed of words from the remaining possible word list, PruneStrategy
selects the guess that maximises the average uncertainty reduction across all answers.
The uncertainty reduction is simply the number of words we are able to rule out after making a particular guess, and it also depends on the knowledge gained from previous guesses.
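Conceptually, the quantity being maximised can be sketched outside the library. In the sketch below, feedback and average_reduction are hypothetical stand-ins for what PruneStrategy computes internally (not functions exported by wordler); the feedback scoring follows standard Wordle rules in a simplified form:

from collections import Counter

def feedback(guess, answer):
    # Score each position: 2 = green, 1 = yellow, 0 = grey (simplified Wordle rules)
    result = [0] * 5
    unmatched = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = 2
        else:
            unmatched[a] += 1
    for i, g in enumerate(guess):
        if result[i] == 0 and unmatched[g] > 0:
            result[i] = 1
            unmatched[g] -= 1
    return tuple(result)

def average_reduction(guess, possible_answers):
    # Average number of words ruled out by this guess across all possible answers
    total_ruled_out = 0
    for answer in possible_answers:
        pattern = feedback(guess, answer)
        consistent = [w for w in possible_answers if feedback(guess, w) == pattern]
        total_ruled_out += len(possible_answers) - len(consistent)
    return total_ruled_out / len(possible_answers)

def pick_guess(possible_answers):
    # Choose the guess from the remaining word list with the highest average reduction
    return max(possible_answers, key=lambda g: average_reduction(g, possible_answers))

This brute-force version is only illustrative: over a 4000-word dictionary it would be slow without further pruning, but the guess it selects maximises the same average reduction described above.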
benchmarking.py
provides some functions to benchmark a strategy and perform other analyses:
bench_simple(fn, top_n): Pass a function fn that instantiates a strategy and a dictionary size top_n, and this will compute the number of guesses required to reach every possible answer in the dictionary.
import benchmarking

def strat_factory():
    return PruneStrategy(init_guess="raise", top_n=2000)

benchmarking.bench_simple(strat_factory, 2000)
starting_guess(top_n): For a dictionary containing the top_n most frequent words, find the best starting guesses based on the average word list reduction obtained. For example, the following are some of the better words when using a dictionary of size 2000:
word     avg. reduction
------------------------------
great    1915.8809404702274
those    1916.736368184117
years    1938.3111555777882
least    1946.3071535767901
tears    1956.0560280140282
raise    1957.1265632816398
With this dictionary size, the guess great rules out roughly 1916 words on average, which is not as good as raise, which averages about 1957.
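Assuming starting_guess is called from the benchmarking module in the same way as bench_simple above, a ranking like this one might be produced with:

import benchmarking

# Rank starting words by average word list reduction over the 2000 most frequent words
benchmarking.starting_guess(2000)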