It is undesirable to close a directory that we haven't read yet to free
up cache capacity, but it's worse to fail to open the next directory
because too many upcoming directories are pinned. This could happen
when sorting, because then we can't prioritize the already-opened ones.
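
A minimal sketch of the pinning constraint (hypothetical names, not the
actual bftw code): cache eviction must skip pinned directories, so if
every open directory is pinned by an upcoming entry, there is nothing
left to close when we need room to open the next one.

    #include <stddef.h>

    struct dir {
        unsigned pincount;               /* pinned dirs must stay open */
        struct dir *lru_prev, *lru_next;
    };

    struct cache {
        struct dir *lru_head, *lru_tail; /* most- to least-recently used */
    };

    /* Evict the least-recently-used directory that isn't pinned. */
    static struct dir *cache_pop(struct cache *cache) {
        for (struct dir *dir = cache->lru_tail; dir; dir = dir->lru_prev) {
            if (dir->pincount > 0) {
                continue; /* an upcoming directory holds this one open */
            }

            /* Unlink it from the LRU list */
            if (dir->lru_prev) {
                dir->lru_prev->lru_next = dir->lru_next;
            } else {
                cache->lru_head = dir->lru_next;
            }
            if (dir->lru_next) {
                dir->lru_next->lru_prev = dir->lru_prev;
            } else {
                cache->lru_tail = dir->lru_prev;
            }
            return dir;
        }

        return NULL; /* everything is pinned: the next open would fail */
    }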
|
When sorting, we can be forced to pop an unopened directory. If enough
other directories are already open, that can lead to ENOMEM when we try
to open it synchronously. To avoid this, force allocations from the
main thread to be attempted even if they would go over the limit.
Also, fix the accounting in bftw_allocdir() if allocation fails.
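
A sketch of both changes together (hypothetical names; the real function
is bftw_allocdir()): the main thread passes force to exceed the soft
limit, and a failed allocation gives its token back.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct bfs_dir {
        int fd;
        /* ... */
    };

    struct state {
        size_t dirlimit; /* remaining bfs_dir allocations allowed */
    };

    static struct bfs_dir *allocdir(struct state *state, bool force) {
        bool took = false;
        if (state->dirlimit > 0) {
            --state->dirlimit;
            took = true;
        } else if (!force) {
            /* I/O threads respect the limit strictly */
            errno = ENOMEM;
            return NULL;
        }

        struct bfs_dir *dir = malloc(sizeof(*dir));
        if (!dir && took) {
            ++state->dirlimit; /* give the token back on failure */
        }
        return dir;
    }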
|
Otherwise, bftw_ids() or bftw_eds() might keep going!
Fixes: 5f16169 ("bftw: Share the bftw_state between iterations of ids/eds")
|
Maintaining balance and strict ordering at the same time forces too much
work onto the main thread.
|
We used to have is_nonexistence_error() to consistently treat ENOENT and
ENOTDIR the same. Recently, we started considering EFAULT the same as
ENAMETOOLONG on DragonFly BSD to work around a kernel bug. Unify both
of these behind a more generic interface.
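
One possible shape for such an interface (a sketch with a hypothetical
name; not necessarily the actual bfs API):

    #include <errno.h>
    #include <stdbool.h>

    /* Does `error` mean the same thing as `category` on this platform? */
    static bool error_is_like(int error, int category) {
        if (error == category) {
            return true;
        }

        switch (category) {
        case ENOENT:
            /* A path that traverses a non-directory also "doesn't exist" */
            return error == ENOTDIR;

    #ifdef __DragonFly__
        case ENAMETOOLONG:
            /* See the DragonFly copyinstr() workaround below */
            return error == EFAULT;
    #endif
        }

        return false;
    }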
|
DragonFly's x86_64 assembly implementation of copyinstr() checks the
wrong pointer when deciding whether to return EFAULT or ENAMETOOLONG,
causing it to always return EFAULT for overlong paths. Work around it
by treating EFAULT the same as ENAMETOOLONG on DragonFly.
Link: https://twitter.com/tavianator/status/1742991411203485713
|
bftw_cache_reserve() can lead to bftw_cache_pop(), which could close the
directory we're trying to unwrap! If that happened, we would try
dup_cloexec(-1), which would fail with EBADF, so there was no observable
bug. But it's better to avoid the whole situation.
|
It's possible for a pincount to drop to zero, then get incremented and
drop back to zero again. If that happens, we shouldn't add the
directory to the to_close list twice.
This should fix the intermittent hang on the macOS CI.
Fixes: 815798e1eea7fc8dacd5acab40202ec4d251d517
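
A sketch of the guard (hypothetical names, not the actual bftw code):
a directory is enqueued for closing at most once, even if its pincount
bounces off zero.

    #include <stdbool.h>
    #include <stddef.h>

    struct dir {
        unsigned pincount; /* references keeping the fd open */
        bool queued;       /* already on the to_close list? */
        struct dir *next;  /* to_close link */
    };

    static struct dir *to_close = NULL;

    static void dir_unpin(struct dir *dir) {
        if (--dir->pincount == 0 && !dir->queued) {
            dir->queued = true;
            dir->next = to_close;
            to_close = dir;
        }
    }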
|
bftw() implements depth-first search by appending files to a batch, then
prepending the batch to the queue. When we switched to separate file/
directory queues, this was only implemented for the file queue.
Unbuffered searches don't use the file queue, so they were all breadth-
first in practice.
This meant that iterative deepening (-S ids) was actually "iterative
deepening *breadth*-first search," a horrible strategy with no advantage
over regular breadth-first search. Now it performs iterative deepening
*depth*-first search, which at least limits its memory consumption.
Fixes: c1b16b49988ecff17ae30978ea14798d95b80018
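
A sketch of the batching trick (hypothetical names): children are
appended to a batch in order, then the whole batch is spliced onto the
front of the queue in O(1), so the most recently discovered directories
come out first.

    #include <stddef.h>

    struct file {
        struct file *next;
    };

    struct list {
        struct file *head;
        struct file **tail; /* points at the last next pointer */
    };

    static void list_init(struct list *list) {
        list->head = NULL;
        list->tail = &list->head;
    }

    static void list_append(struct list *list, struct file *file) {
        file->next = NULL;
        *list->tail = file;
        list->tail = &file->next;
    }

    /* Splice the whole batch onto the front of the queue. */
    static void list_prepend(struct list *queue, struct list *batch) {
        if (batch->head) {
            *batch->tail = queue->head;
            if (!queue->head) {
                queue->tail = batch->tail;
            }
            queue->head = batch->head;
            list_init(batch);
        }
    }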
|
Closes #65.
|
The previous accounting didn't fully control the number of allocated
bfs_dirs, as the dirlimit was incremented once we popped the directory,
not when we freed it.
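
A sketch of the fixed accounting (hypothetical names): the dirlimit
token is returned only once the bfs_dir is actually freed.

    #include <stdlib.h>

    struct bfs_dir {
        int fd;
        /* ... */
    };

    struct state {
        size_t dirlimit; /* remaining bfs_dir allocations allowed */
    };

    static void freedir(struct state *state, struct bfs_dir *dir) {
        free(dir);
        /* Incrementing on pop instead would briefly let more than
         * the limit's worth of bfs_dirs exist at once. */
        ++state->dirlimit;
    }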
|
A file can be on the to_open and to_read lists at the same time, but
otherwise it is only ever on one list, so we can save memory by sharing
the pointers.
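
A sketch of the idea (hypothetical field names; the real struct
differs): to_open and to_read need separate links since both can be
live at once, while the remaining, mutually exclusive memberships can
share one link in a union.

    struct file {
        /* These two lists can hold the file simultaneously */
        struct file *to_open_next;
        struct file *to_read_next;

        /* Any other membership is exclusive, so the links share storage */
        union {
            struct file *queue_next;
            struct file *to_close_next;
        };
    };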
|
Now that the dirlimit provides backpressure on the number of open
directories, we can use a uniformly larger queue depth for increased
performance. The current parameters were tuned with a small grid search
on my workstation.
|
For !BFS_USE_UNWRAPDIR, if a file is still pinned in bftw_closedir(), it
has to stay open until its pincount drops to zero. Since this happens
in bftw_ioq_pop(), we can't immediately call bftw_unwrapdir() as that
adds to the ioq. Instead, add it to a list that gets drained by the
next bftw_gc().
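
A sketch of the deferred close (hypothetical names; per the message, the
real draining happens in bftw_gc()):

    #include <stdlib.h>
    #include <unistd.h>

    struct dir {
        int fd;
        unsigned pincount;
        struct dir *next; /* to_close link */
    };

    struct state {
        struct dir *to_close; /* dirs still pinned when we tried to close */
    };

    /* Called from the main loop once outstanding I/O has been popped. */
    static void gc(struct state *state) {
        struct dir *dir;
        while ((dir = state->to_close)) {
            if (dir->pincount > 0) {
                break; /* still pinned; retry on the next gc() pass */
            }
            state->to_close = dir->next;
            close(dir->fd);
            free(dir);
        }
    }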
|
I tried this before in #105, but it led to performance regressions. The
key to avoiding those regressions is to put some backpressure on how
many bfs_dirs can be allocated simultaneously.
|
This has the potential to fail on at least one known platform: MacPorts
with the legacy-support implementation of fdopendir().
Link: https://github.com/macports/macports-ports/pull/19047#issuecomment-1636059809
|
This fixes a storm of EMFILE retries observed with -j1 on a very large
directory tree.
Fixes: 7888fbababd22190e9f919fc272957426a27969e
|
This required shuffling a lot of code around. Hopefully the new order
makes more sense.
|
Parallelism is controlled by the new -j flag.
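
For example (usage sketch, matching the attached-argument -j1 spelling
above):

    $ bfs -j4 /usr -name '*.h'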
|
In anticipation of C23, since those headers won't be necessary any more.
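
Presumably <stdbool.h> and <stdalign.h> (an assumption; the subject line
isn't shown here). In C23 their contents become keywords, so code like
this compiles with no includes at all:

    /* C23: bool/true/false, alignas, and static_assert are keywords */
    static_assert(sizeof(int) >= 2, "int is at least 16 bits");

    alignas(16) static char buffer[64];

    static bool ready = false;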