* templates: cache torrent view filelist
Using flask-caching, we can add a 1 hour cache to the template
output of a filelist, varying it by the key "filelist" + the
hex infohash of a torrent.
Using a very big filelist as a test, I see page load speeds improve by
about an order of magnitude (400 ms -> 37 ms); the approach is sketched
below.
* templates: increase filelist cache to 24 hours
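A minimal sketch of the idea, assuming a Flask-Caching instance named
cache (as set up in extensions.py, per a later commit in this log); the
function and attribute names here are illustrative, not the codebase's:

    from flask import render_template
    from markupsafe import Markup

    from nyaa.extensions import cache  # assumed import path

    def render_cached_filelist(torrent):
        # Vary the cache entry by "filelist" + the torrent's hex infohash
        key = 'filelist' + torrent.info_hash_as_hex  # illustrative attribute
        html = cache.get(key)
        if html is None:
            html = render_template('filelist.html', torrent=torrent)
            cache.set(key, html, timeout=24 * 3600)  # 24 hours, per above
        return Markup(html)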
This commit introduces a regex to strip illegal (expectedly unused)
characters from the torrent display name, information link and
description upon upload or edit.
Fixes #541
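A minimal sketch of such a sanitizer; the exact character class is an
assumption (control characters, minus tab and newlines), not necessarily
the regex actually merged:

    import re

    # Strip control characters that should never appear in a display name,
    # information link or description; \t, \n and \r are left alone.
    ILLEGAL_CHARACTERS_RE = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]')

    def sanitize_string(string):
        return ILLEGAL_CHARACTERS_RE.sub('', string)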
If a user has a comment under the edit time limit on a comment-locked
torrent, but is also still affected by the new-account CAPTCHA
cooldown, the template would throw an error as we tried to getattr on
a None object (namely, the comment_form).
To fix this, we also need to check around the edit form whether the
comment_form exists.
Someone put this inside the loop despite it essentially being
constant. Probably makes immeasurably little difference perf-wise,
but why not fix it anyway.
Apparently, when running things with Python 3.7, these three
dependencies ran into build issues with a renamed struct field.
Upgrading them seems to fix the issue, and hopefully keeps them
working with Python 3.6 as well.
Windows has a few special filenames that explorer.exe and the command
line are not allowed to see, but that can still be created by
applications. This is due to some janky DOS compatibility.
These filenames can be abused to troll Windows users, so we should
probably blacklist them.
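For reference, a hedged sketch of such a check; Windows reserves these
DOS device names even when an extension is attached (e.g. "nul.txt"):

    import re

    # CON, PRN, AUX, NUL, COM1-COM9 and LPT1-LPT9 are DOS device names.
    RESERVED_NAME_RE = re.compile(
        r'^(CON|PRN|AUX|NUL|COM[1-9]|LPT[1-9])(\..*)?$', re.IGNORECASE)

    def is_reserved_filename(filename):
        return RESERVED_NAME_RE.match(filename) is not None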
Adds a new config entry (EMAIL_SERVER_BLACKLIST, tuple of IPv4 addresses
as strings) and an email validator for registering, which will query all
the MX records for the domain and reject the registration if any of the
A records for any of the MX records are found in the blacklist.
If the query fails, the blacklist is ignored; the email is accepted.
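A sketch of the validator logic, assuming dnspython 2.x (older versions
use dns.resolver.query); the function name and error handling here are
illustrative:

    import dns.exception
    import dns.resolver  # dnspython

    def mx_server_blacklisted(domain, blacklist):
        # blacklist is the EMAIL_SERVER_BLACKLIST tuple of IPv4 strings
        try:
            for mx in dns.resolver.resolve(domain, 'MX'):
                for a in dns.resolver.resolve(str(mx.exchange), 'A'):
                    if a.to_text() in blacklist:
                        return True
        except dns.exception.DNSException:
            return False  # query failed: ignore the blacklist, accept
        return False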
Infobubble text is now in a separate file, along with a timestamp,
so that changes to it don't result in merge conflicts too often.
We also add some JS to make the bubble dismissible, keeping track
of the last dismissed timestamp in localStorage.
Removes the specialized template ES magnet creator, since create_magnet()
can use both Torrents and ES objects. Search results will now get
properly escaped magnets.
Slightly optimizes the tracker adding and string joins.
RIP base32, wonder how many bad clients will break with sha1.
A cosmetic change.
By swapping quote_via to quote, we no longer convert spaces to pluses,
keeping the name intact (which only matters until peers send the metadata).
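The difference in a nutshell (urlencode defaults to quote_plus):

    from urllib.parse import quote, urlencode

    params = {'dn': 'My Torrent Name'}
    urlencode(params)                   # 'dn=My+Torrent+Name'
    urlencode(params, quote_via=quote)  # 'dn=My%20Torrent%20Name'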
Using Flask-Caching, we can memoize the magnet_uri method. Here, a
timeout of 1 hour is chosen, though that value can be fiddled with.
The cache is defined in extensions.py, but gets initialised in
__init__.py.
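Roughly, the wiring looks like this (in the codebase magnet_uri is a
method on the torrent model; the free function here is just for
illustration):

    # extensions.py
    from flask_caching import Cache
    cache = Cache()

    # __init__.py
    from nyaa.extensions import cache
    cache.init_app(app)

    # the memoized call, with the 1 hour timeout mentioned above
    @cache.memoize(timeout=3600)
    def magnet_uri(torrent):
        return create_magnet(torrent)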
* Add config option to enable/disable gravatar
This is useful when running a development instance behind a firewall
or NAT, where gravatar cannot reach you to serve up the default user
avatar.
* Pregenerate Gravatar default image URLs
If possible (i.e. SERVER_NAME is set), we can pregenerate the constant
gravatar default URL once at application startup, and re-use that,
as url_for calls are surprisingly expensive.
Especially on torrent view pages with lots of comments, this cuts down
on url_for calls massively, saving about 0.3 ms per call on my system.
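A sketch of the pregeneration, assuming SERVER_NAME is set so url_for
works outside a request (the config key and filename are illustrative):

    from flask import url_for

    if app.config.get('SERVER_NAME'):
        with app.app_context():
            app.config['GRAVATAR_DEFAULT_URL'] = url_for(
                'static', filename='img/avatar/default.png', _external=True)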
Now uses flask.flash to give feedback to the person who clicks the
button, because caching may make things confusing.
Also only activate a user if they're inactive, as again, caching
can lead to staff pressing the button multiple times in a row,
leading to unnecessary log messages.
* Implement range bans
People connecting from banned IP ranges are unable to upload
torrents anonymously, and need to manually have their accounts
activated.
This adds a new table "rangebans", and a command line utility,
"rangeban.py", which can be used to add, list and remove rangebans
from the command line.
As an example:
./rangeban.py ban 192.168.0.0/24
This would rangeban anything in this /24.
The temporary_tor column allows automated scripts to clean out and
re-add ever-changing sets of ranges to be banned without affecting
the other ranges.
This has only been tested for IPv4.
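The lookup itself boils down to integer masking; a hedged sketch
(column names follow the commits below, the query is illustrative):

    import ipaddress

    def is_rangebanned(ip_str, rangebans):
        ip = int(ipaddress.ip_address(ip_str))
        # each ban stores the pre-masked network address plus the netmask
        return any(ip & ban.mask == ban.masked_cidr for ban in rangebans)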
* Revise Rangebans
Add an id column, and change "temporary_tor" to "temp". Also
index masked_cidr and mask.
* rangebans: fix enabled and the binary op
kill me
* Add enabling/disabling bans to rangeban.py
* rangebans: fail earlier on garbage arguments
* rangebans: fix linter errors
* rangeban.py: don't shadow builtin keyword 'id'
* rangebans: change temporary ban logic, column
The 'temp' column is now a nullable time column. If the field is
null, the ban is understood to be permanent. If there is a time
in there, it's understood to be the creation time of the ban.
This allows scripts to e.g. delete all temporary bans older than
a certain amount of time.
Also, rename the '_cidr_string' column to 'cidr_string', because
reasons.
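For example, a cleanup script could prune stale temporary bans like
this (the model and session names are assumed):

    from datetime import datetime, timedelta

    cutoff = datetime.utcnow() - timedelta(hours=24)
    db.session.query(RangeBan) \
        .filter(RangeBan.temp.isnot(None), RangeBan.temp < cutoff) \
        .delete(synchronize_session=False)
    db.session.commit()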
* rangeban.py: use ip_address to parse CIDR subnet
* rangebans: fixes to the mask calculation and query
Neither was a bug per se; they were just technically unneeded/incorrect.
* De-meme apparently
...by splitting input into characters instead of whitespace-delimited
words. This means you can now match partial words, real substrings from
anywhere: "foo ba" will match "Foo Bar Baz", whereas previously you had
to have full words ("foo bar") to match anything.
My dev setup incurred an 8% increase in storage usage, from ~13MB to
~14MB (for ~40k torrents).
Small change, big improvement. Wonder why I didn't do this at first.
* user page: add manual activation button for mods
Moderators can press this button on inactive users to manually
activate their accounts.
Furthermore, the admin form code has been refactored a bit, reducing
some code duplication.
Before, long.tokens.with.dots.or.dashes would get edge-ngrammed up to the
ngram limit, so we'd get to long.tokens.wit, which would then be split,
discarding "with.dots.or.dashes" completely. The fullword index would
keep the complete large token, but without any ngramming, so incomplete
searches (like "tokens") would not match it, only the full token.
Now, we split words before ngramming them, so the main index will
properly handle words up to the ngram limit. The fullword index will
still handle the longer words for non-ngram matching.
Also optimized away duplicate tokens from the indices (since we rely on
boolean matching, not scoring) to save a couple megabytes of space.
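The resulting analysis chain looks roughly like this, expressed as the
Python dict that would go into the index settings (the filter and
analyzer names are illustrative, not the actual configuration):

    ANALYSIS_SETTINGS = {
        "analysis": {
            "filter": {
                # edge-ngram tokens up to the 15-character limit
                "edge_ngram_15": {"type": "edge_ngram",
                                  "min_gram": 1, "max_gram": 15},
                # drop duplicate tokens; we rely on boolean matching
                "dedup": {"type": "unique", "only_on_same_position": False},
            },
            "analyzer": {
                "display_name_index": {
                    "tokenizer": "standard",
                    "filter": ["word_delimiter", "lowercase",
                               "edge_ngram_15", "dedup"],
                },
            },
        },
    }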
* added es_sync_config.json and minified js to .gitignore
* reordered README.md to reflect that MySQL Binlogging must be enabled before running import_to_es.py
* Extend ES term preprocessing for OR groups
Implements handling "foo"|"bar" literal OR groups in the Elasticsearch
term preprocessor. Groups can be negated with -, but don't participate
in operator precedence (much like plain literals).
This is a partial hack; the real solution would be to parse the entire
search terms ourselves, with AND and OR groups, negations etc. But
having that work neatly with the simple_query_string would be a bit of
a hassle.
* Update help.html search tips
since search (quoting strings) has changed a bit.
* Optimize Elasticsearch fullword field
Since the main display_name field ngrams words up to 15 characters,
anything at or under that length will already be indexed, so the
fullword field (which we have for words longer than 15 characters) only
needs to index words longer than that.
* Preprocess ES terms for better literal matching
This commit adds a new .exact subfield to display_name, which holds a
barely-filtered version of the original title we can do "literal"
matching against. This is not real substring matching, but quoting
terms now actually does something!
Implements a simple preprocessor for the search terms to extract quoted
parts from the search terms, optionally prefixed with - to negate them.
The preprocessor will create a query that joins all three query types:
the simple_query_string, must-phrases and must-not-phrases.
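The combined query comes out shaped roughly like this (the .exact
subfield is from this commit, everything else is illustrative):

    def build_search_query(simple_terms, must_phrases, must_not_phrases):
        return {
            "bool": {
                "must": [{
                    "simple_query_string": {
                        "query": simple_terms,
                        "fields": ["display_name"],
                        "default_operator": "AND",
                    },
                }] + [{"match_phrase": {"display_name.exact": phrase}}
                      for phrase in must_phrases],
                "must_not": [{"match_phrase": {"display_name.exact": phrase}}
                             for phrase in must_not_phrases],
            },
        }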
Hitting the cancel button returns null, not "". The toLowerCase()
therefore fails, and the thrown exception means "sure, go ahead and
submit this" to JS for some godforsaken reason.
Just remove the toLowerCase for now and have people type the names
properly.