They don't mention the approach I'd take for a naive spellcheck:
Generate a (giant) trie from the dictionary. Then perform a DFS on the trie, allowing at most <the current closest word's number of edits> edits from the input word and keeping track of the edits made so far - at each node, first try no change, then a deletion, then a substitution, then an insertion. This works best with a small alphabet.
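A minimal sketch of that search, with names I've made up (nothing here is from a real library). The DFS carries the number of edits used so far and prunes any branch that already equals or exceeds the best edit count found, so the budget tightens as closer words turn up:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.word = None     # set to the full word at terminal nodes


def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root


def closest(root, target, max_edits=3):
    # best holds the closest word seen so far and its edit count;
    # it starts one past max_edits so anything within budget beats it.
    best = {"word": None, "edits": max_edits + 1}

    def dfs(node, i, edits):
        # Branch and bound: give up if we've already used as many
        # edits as the current closest word.
        if edits >= best["edits"]:
            return
        if node.word is not None:
            # Any leftover target characters would have to be deleted.
            total = edits + (len(target) - i)
            if total < best["edits"]:
                best["word"], best["edits"] = node.word, total
        for ch, child in node.children.items():
            if i < len(target) and ch == target[i]:
                dfs(child, i + 1, edits)            # no change
            else:
                dfs(child, i, edits + 1)            # insertion
                if i < len(target):
                    dfs(child, i + 1, edits + 1)    # substitution
        if i < len(target):
            dfs(node, i + 1, edits + 1)             # deletion
    dfs(root, 0, 0)
    return best["word"], best["edits"]
```

For example, `closest(build_trie(["cat", "cap", "car"]), "cst")` finds "cat" at edit distance 1, and the pruning then kills every remaining branch that has already spent an edit.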
An optimization: a DAWG is potentially even more space-efficient than a hashmap (it effectively compresses the input data by sharing common suffixes), but takes a while to precompute.