Crawling the publish directory might be slow for some big sites. There might be a few opportunities for optimizing it:
- Each `readdir` already performs a `stat` syscall, so doing it again might be redundant (`netlify-plugin-ttl-cache/src/index.js`, line 21 in 54127d8):

  ```js
  const { mtime } = await stat(file);
  ```

- If no `exclude` input is specified, there is no need to perform a `test()` on the filename. Even though the default regular expression `a^` should be fast and never match, it might become more expensive when performed thousands of times.
- Directories that are part of `exclude` might not need to be crawled.
There might also be some potential bugs in the directory crawling. For example, if a file were a symlink to one of its parent directories, would the crawl keep running until memory is exhausted?
I am wondering whether using a tried-and-tested library like readdirp might help fix all of this, and also simplify the code? What are your thoughts?