Commit ebdc2e2
fix: optimize analyze for performance
dmarsic committed Apr 25, 2023
1 parent e8d5ace commit ebdc2e2
Showing 2 changed files with 33 additions and 15 deletions.
37 changes: 30 additions & 7 deletions README.md
section 100
section 1000
0.3, terms=1 : 0, 0.3s
0.2, terms=2 : 0, 0.2s
0.3, terms=3 : 0, 0.3s
section 10000
2.7, terms=1 : 0, 2.7s
2.7, terms=2 : 0, 2.7s
2.7, terms=3 : 0, 2.7s
section 52478
15.1, terms=1 : 0, 15.1s
15.4, terms=2 : 0, 15.4s
15.6, terms=3 : 0, 15.6s
```

Datasets of around 1000 entries yield reasonable search times, which
is the intended use case for TinySearch. Still, there is probably
room for improvement.

## Can we make it faster?

Most of the time is spent in the analyzer, so improving performance
means reducing the analyzer's processing time. The default
`SimpleEnglishAnalyzer` has already been heavily optimized.
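To verify where the time goes before optimizing further, a quick measurement with Python's built-in `timeit` can help. The `SimpleAnalyzer` class below is a simplified stand-in for illustration, not the real `SimpleEnglishAnalyzer`:

```python
import timeit


# Hypothetical stand-in for tinysearch's SimpleEnglishAnalyzer:
# any object with an analyze(text) -> list[str] method works here.
class SimpleAnalyzer:
    def analyze(self, text):
        return text.lower().split()


analyzer = SimpleAnalyzer()
doc = "The quick brown fox jumps over the lazy dog. " * 50

# Time 1000 analyze() calls to see how the cost accumulates per document.
seconds = timeit.timeit(lambda: analyzer.analyze(doc), number=1000)
print(f"1000 calls: {seconds:.3f}s")
```

Running this against the real analyzer instead of the stand-in would show whether tokenization or the per-token transformations dominate.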

The next step to consider is splitting the search into two phases:
indexing and searching. Since the analyzer needs to process every
document, indexing can happen early in the process lifetime, and
searching only when the user requests it. This has the additional
benefit of indexing once and searching many times.

```python
from tinysearch.index import Index
from tinysearch.search import Search

# Index once, up front; documents are analyzed here.
i = Index(docs)

# ...later, as many times as needed...
s = Search(i, query)
print(s.results.matches[0])
```
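A minimal sketch of how such an index/search split could work. The class internals below are assumptions for illustration only, not the actual tinysearch implementation, and the sketch exposes `matches` directly rather than a `results` object:

```python
from typing import Dict, List, Set


class Index:
    """Builds a term -> document-ids map once, at indexing time."""

    def __init__(self, docs: List[str]):
        self.docs = docs
        self.terms: Dict[str, Set[int]] = {}
        for doc_id, doc in enumerate(docs):
            # Analysis happens once per document, up front.
            for token in doc.lower().split():
                self.terms.setdefault(token, set()).add(doc_id)


class Search:
    """Looks up pre-analyzed terms; no per-query scan over documents."""

    def __init__(self, index: Index, query: str):
        matched: Set[int] = set()
        for token in query.lower().split():
            matched |= index.terms.get(token, set())
        self.matches = [index.docs[i] for i in sorted(matched)]


docs = ["grey squirrel", "red fox", "red panda"]
i = Index(docs)       # pay the analysis cost once
s = Search(i, "red")  # each query is a cheap dictionary lookup
print(s.matches)      # → ['red fox', 'red panda']
```

The design point is that the expensive analysis moves into `Index.__init__`, so repeated queries only touch the precomputed term map.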

## License

See LICENSE.
11 changes: 3 additions & 8 deletions tinysearch/analyzer.py
Original file line number Diff line number Diff line change
It has a single public method: analyze
The analyze method will be called a lot, so it needs to be optimized
for performance as much as possible.
Example usage:
a = Analyzer()

def analyze(self, text: str) -> List[str]:
tokens = re.split(r"\s+", text)

# Apply transformations on each token.
return [self.stem(self.lower(self.remove_nonchars(token))) for token in tokens]
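The change replaces a per-token loop with a single list comprehension, avoiding the intermediate list and per-iteration appends. A self-contained sketch of the same pipeline, where the `remove_nonchars`, `lower`, and `stem` bodies are simplified assumptions rather than the real implementations:

```python
import re
from typing import List


class Analyzer:
    """Simplified stand-in for SimpleEnglishAnalyzer."""

    def remove_nonchars(self, token: str) -> str:
        return re.sub(r"[^a-zA-Z0-9]", "", token)

    def lower(self, token: str) -> str:
        return token.lower()

    def stem(self, token: str) -> str:
        # Crude suffix stripping, for illustration only.
        return token[:-1] if token.endswith("s") else token

    def analyze(self, text: str) -> List[str]:
        tokens = re.split(r"\s+", text)
        # One pass, no intermediate list: each token flows through
        # remove_nonchars -> lower -> stem.
        return [self.stem(self.lower(self.remove_nonchars(t))) for t in tokens]


print(Analyzer().analyze("Foxes jump, dogs sleep!"))  # → ['foxe', 'jump', 'dog', 'sleep']
```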
