There are a number of things not present in "normal" imports that we
nevertheless support and need to be tested:
* broken timezone adjustment (+051800->+0261; observed in the wild
  in real repos, and the adjustment prevents fast-import from dying)
* commits missing an author (observed in the wild in a real repo;
  we just set the author to the committer)
* optional additional linefeeds in the input allowed by
git-fast-import but usually not written by git-fast-export
* progress and checkpoint objects
* progress, checkpoint, and 'everything' callbacks
Signed-off-by: Elijah Newren <newren@gmail.com>
While most users will run filter-repo as a standalone tool, with
RepoFilter.run() as the final function called, filter-repo can also be
used as a library that does additional work after that function
completes. So, simply return from the function when it is done rather
than calling sys.exit().
Signed-off-by: Elijah Newren <newren@gmail.com>
The everything_callback took two arguments, the first being a string
naming the type of the second. There was no point to this argument;
callers can simply compare type(second) to the relevant classes.
Remove it.
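A minimal sketch of the simplified callback; Blob and Commit here are
local stand-ins for the real git_filter_repo classes of the same names,
and the callback body is purely illustrative:

```python
# Stand-ins for the real git_filter_repo classes.
class Blob: pass
class Commit: pass

def everything_callback(obj):
    # Formerly received ('commit', obj); now the type is recovered directly.
    if type(obj) is Commit:
        return 'commit'
    elif type(obj) is Blob:
        return 'blob'
    return 'other'
```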
Signed-off-by: Elijah Newren <newren@gmail.com>
The only times this is ever printed are when debugging filter-repo
itself, or when trying to add tests to reach 100% line coverage. But
the printing was broken when objects were skipped (which caused a
mapping from int -> None). Fix the format specifier to handle this
case too.
Signed-off-by: Elijah Newren <newren@gmail.com>
We don't expect to ever get progress or checkpoint directives in normal
operation, but the --stdin flag makes it a possibility. In such a case,
the progress directives could actually break our parsing, since
git-fast-import will just print them to its stdout, which is what we
read from to find new commit names so we can do commit-message hash
updating.
So, pass these along to a progress_callback, but don't dump them by
default. Also, it is not clear checkpoint directives make sense given
that we'll be filtering and only getting a subset of history (and I'm
dubious on checkpoint's utility in general anyway as fast-import is
relatively quick), so pass these along to a callback but don't use them
by default.
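The dispatch described above can be sketched as follows; Progress and
the handler function are invented stand-ins for the real
git_filter_repo machinery:

```python
# Stand-in for the real git_filter_repo Progress class.
class Progress:
    def __init__(self, message):
        self.message = message

def handle_progress(progress, progress_callback=None):
    # Hand the object to the caller's callback, if any...
    if progress_callback:
        progress_callback(progress)
    # ...but do NOT dump it to fast-import by default, so fast-import's
    # stdout (which we parse for new commit names) stays clean.

seen = []
handle_progress(Progress('50 objects'),
                progress_callback=lambda p: seen.append(p.message))
```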
Signed-off-by: Elijah Newren <newren@gmail.com>
We don't run fast-export with rename detection, even though we have
code for handling it, because we decided to use a rev-list|diff-tree
pipeline instead. The code was manually tested and determined to be
working, and it might be useful in the future, so I don't want to
outright delete it. But since we know we can't trigger it right now,
add a
# pragma: no cover
annotation on these lines so they don't show up in coverage reports.
Signed-off-by: Elijah Newren <newren@gmail.com>
This also generates line coverage statistics for t/t9391/*.py, but the
point is line coverage of git-filter-repo.
Signed-off-by: Elijah Newren <newren@gmail.com>
Since there were multiple places in the code where we returned early
knowing that we didn't have a translation of old_hash to a new_hash, we
need to update _commits_referenced_but_removed from each of them.
Signed-off-by: Elijah Newren <newren@gmail.com>
Due to the invariants we maintain with _commit_renames and
_commit_short_old_hashes (the latter always gets an extra entry in
either a key or a value whenever _commit_renames gains a new key/value
pair), there were a few lines of code that we could not ever reach.
Replace them with an assertion that the condition used for them is
never true.
Signed-off-by: Elijah Newren <newren@gmail.com>
The test checks the exact output of the report, meaning that if the
output changes at all this test will need to be updated, but it at
least makes sure we are getting all the right kinds of information. I
do not expect the output format to change very often.
Signed-off-by: Elijah Newren <newren@gmail.com>
The former logic for keeping track of whether we had seen annotated
tags (and thus whether they were interesting and should avoid being
pruned) was just plain buggy. I do not know whether it was that broken
from the start, or whether surrounding code that made it work was lost
in one of my history rewrites, but fix it. I'll include tests of it
with --subdirectory-filter shortly.
Signed-off-by: Elijah Newren <newren@gmail.com>
We previously would abort if we had been requested to rename files and
that caused two different files to go to the same path. However, if
the files have identical contents and mode, then we can treat the
request as a desire from the user to just coalesce the extra copies.
Signed-off-by: Elijah Newren <newren@gmail.com>
Pruning of commits which become empty can result in a variety of
topology changes: a merge may have lost all the ancestors corresponding
to one (or more) of its parents, a merge may end up merging a commit
with itself, or a merge may end up merging a commit with its own
ancestor. Merging a commit with itself makes no sense, so we'd rather
prune down to one parent and hopefully prune the merge commit, but we
do need to worry about whether there are changes in the commit and
whether the original merge commit also merged something with itself.
We have similar cases for dealing with a merge of some commit with its
own ancestor: if the original topology did the same, or the merge
commit has additional file changes, then we cannot remove the commit.
But otherwise, the commit can be pruned.
Add testcases covering the variety of changes that can occur to make
sure we get them right.
Signed-off-by: Elijah Newren <newren@gmail.com>
Due to pruning of empty commits, merge commits can become degenerate
(same commit serving as both parents, or one parent is an ancestor of
one of the others). While we usually want to allow such degenerate
merge commits to themselves be pruned (assuming they add no additional
file changes), we do not want to prune them if the merge commit in the
original repository had the same degenerate topology. So, we need to
keep track of the ancestry graph of the original repository as well and
include it in the logic about whether to allow merge commits to be
pruned.
Signed-off-by: Elijah Newren <newren@gmail.com>
There are several cases to worry about with commit pruning: commits
that start empty and have no parent, commits that start empty and
have a parent which may or may not get pruned, commits which had
changes but became empty, commits which were merges but lost a line
of ancestry and have no changes of their own, etc. Add testcases
covering these cases, though most topology-related ones will be
deferred to a later set of tests.
Signed-off-by: Elijah Newren <newren@gmail.com>
The reason we sometimes want to keep commits that start empty is that
they may have been intentionally added for build or versioning
reasons. Not all commits that start empty are useful, though, even if
intentional, because they could pre-date the introduction of a
directory we are filtering for. So, we always allowed an exception:
if the number of parents has been reduced, we also allow pruning
commits that started empty.
However, there is a similar case: one or more contiguous chunks of
history may only touch directories/files that are not of interest;
empty commits within that range of history are likewise uninteresting
to us. Since "interesting" empty commits take the form of a new commit
on top of interesting history (because otherwise the commit loses its
special build or versioning utility), we should loosen the rules and
consider empty commits whose parent was pruned to be prunable as well;
we no longer use the existence of some other distant ancestor of the
empty commit in determining whether it is prunable.
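The loosened rule can be sketched as a small predicate; the function
and argument names here are invented for illustration and do not match
the actual filter-repo code:

```python
# An empty commit survives only if it intentionally started empty AND
# its parent was not itself pruned.
def prunable(is_empty, started_empty, parent_was_pruned):
    if not is_empty:
        return False              # commits with changes are never pruned here
    if not started_empty:
        return True               # became empty through filtering: prune
    return parent_was_pruned      # intentional empty commit: keep it unless
                                  # it now sits on top of pruned history
```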
Signed-off-by: Elijah Newren <newren@gmail.com>
Due to the special handling of 'from' in the fast_export stream and the
aggregation of the 'from' commit with the 'merge'd commits, a parentless
commit has its parents represented as [None] rather than []. We had
fixed this up in other places, but forgot to do so with orig_parents,
breaking our comparison. Handle it for orig_parents too.
Signed-off-by: Elijah Newren <newren@gmail.com>
fast-import from versions of git up to at least 2.21.0 had a bug in the
handling of the get-mark directive that would cause it to abort with a
parsing error on valid input. While a fix has been submitted upstream
for this, add some extra newlines in a way that will work with both old
and new git versions.
Signed-off-by: Elijah Newren <newren@gmail.com>
Many of the callback functions might only be a single line, so
instead of forcing the user to write a full-blown program with an
import and everything, let them just specify the body of the callback
function as a command-line parameter. Add several tests of this
functionality as well.
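One plausible way a body string from the command line can be turned
into a callable (the real option parsing and wrapping code differs;
this only shows the idea):

```python
# Wrap the user-supplied body in a function definition and exec it.
def make_callback(body):
    source = "def callback(x):\n" + "\n".join(
        "  " + line for line in body.splitlines())
    scope = {}
    exec(source, scope)
    return scope["callback"]

# e.g. the user passed 'return x.upper()' on the command line:
upcase = make_callback("return x.upper()")
```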
Signed-off-by: Elijah Newren <newren@gmail.com>
Add callbacks for:
* filename:
  simplifies filtering/renaming based solely on filename; return
  None to have the file removed, or the original or a new name for
  the file
* message:
  simplifies tweaking both commit and tag messages; if you want to
  tweak just one of the two, use either tag_callback or
  commit_callback
* person_name:
  simplifies tweaking actual names of people without worrying where
  they come from (author, committer, or tagger)
* email:
  simplifies tweaking email addresses without worrying where they
  come from (author, committer, or tagger)
* refname:
  simplifies tweaking reference names, regardless of whether they
  come from FastExport commit objects, reset objects, or tag objects
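The filename callback's return-value convention can be illustrated as
below; the wiring into the filtering machinery is omitted, the rename
rule is invented, and str paths are used for simplicity:

```python
def filename_callback(filename):
    if filename.endswith('.bin'):
        return None                     # None: remove the file
    if filename.startswith('src/'):
        return 'lib/' + filename[4:]    # new name: rename the file
    return filename                     # original name: keep as-is
```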
Signed-off-by: Elijah Newren <newren@gmail.com>
I want to allow callbacks that could operate on similar pieces of commit
or reset or tag objects (e.g. reference names, email addresses);
restructure the current ones slightly to both allow more general ones to
be added and to make the existing ones slightly clearer.
Signed-off-by: Elijah Newren <newren@gmail.com>
Users may want to run --analyze both before and after filtering in
order to both find the big objects to remove and to verify they are
gone and the overall repository size and filenames are as expected.
As such, aborting and telling the user there's a previous analysis
directory in the way is annoying; just remove it instead.
Signed-off-by: Elijah Newren <newren@gmail.com>
Users may want to run multiple filtering operations, either because it's
easier for them to do it that way, or because they want to combine both
path inclusion and exclusion. For example:
git filter-repo --path subdir
git filter-repo --invert-paths --path subdir/some-big-file
cannot be done in a single step. However, the first filtering operation
would make the repo not look like a clean clone anymore (because it is
not a clean clone anymore), causing the safety check to trigger and
requiring the --force flag. But once we've allowed them to do
repository rewriting, there's no point disallowing further rewriting.
So, write a .git/filter-repo/already_ran file when we run and treat the
presence of that file the same as providing a --force flag.
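A sketch of the marker mechanism described above, under the assumption
that the marker lives at .git/filter-repo/already_ran (function names
are invented):

```python
import os, tempfile

def effective_force(git_dir, force_flag):
    # A prior run's marker file counts the same as --force.
    marker = os.path.join(git_dir, 'filter-repo', 'already_ran')
    return force_flag or os.path.exists(marker)

def record_run(git_dir):
    # Written at the start of every rewrite.
    os.makedirs(os.path.join(git_dir, 'filter-repo'), exist_ok=True)
    open(os.path.join(git_dir, 'filter-repo', 'already_ran'), 'w').close()

gitdir = tempfile.mkdtemp()
first = effective_force(gitdir, False)   # no marker yet: safety check applies
record_run(gitdir)
second = effective_force(gitdir, False)  # marker present: acts like --force
```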
Signed-off-by: Elijah Newren <newren@gmail.com>
We need to have a list of references to rewrite (which we pass along to
fast-export), but we accepted arbitrary rev-list args. This could
backfire pretty badly if a user tried the wrong but somewhat
straightforward
git filter-repo --invert-paths --path foo bar
instead of the expected
git filter-repo --invert-paths --path foo --path bar
because passing 'bar' as a rev-list arg means that fast-export
happily notices some kind of rev-list was specified, but not a
meaningful one, so it produces empty output...and filter-repo
interprets empty output as "this is a history with no commits" and
promptly deletes everything.
Partial history rewrites aren't yet properly supported anyway (I would
need to stop doing the disconnect-of-origin-remote, the deleting of
references that were filtered-away or otherwise didn't show up, and
the post-rewrite gc+prune). When I add such support, I'll revisit
how these arguments can be specified.
Signed-off-by: Elijah Newren <newren@gmail.com>
Commit.dump() showed up in a profile. Reorganize the code slightly to
build up much of the string into one big chunk before calling
file_.write(); this shaves a few percent off the total runtime. (Where
total runtime is again measured in terms of the
cat fast-export.original | git filter-repo --stdin --dry-run ...
trick mentioned a few commits back.) Trying to make a [c]StringIO
object in order to build more of the string up into a single place
to reduce the number of file_.write() calls was apparently
counter-productive, so only the header before the parents is combined
into a single string.
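The single-big-chunk idea looks roughly like this; the real
Commit.dump writes more fields and the field names below are
illustrative only:

```python
import io

def dump_header(file_, branch, author, committer, message):
    # Concatenate the fixed header fields into one string...
    chunk = ('commit %s\n' % branch +
             'author %s\n' % author +
             'committer %s\n' % committer +
             'data %d\n%s' % (len(message), message))
    file_.write(chunk)   # ...so this is one write() call instead of four

out = io.StringIO()
dump_header(out, 'refs/heads/master', 'A U Thor <a@e.x> 0 +0000',
            'C O Mitter <c@e.x> 0 +0000', 'Initial\n')
```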
Signed-off-by: Elijah Newren <newren@gmail.com>
We hit the same filenames over and over as we traverse history, but
our expressions for renaming or filtering within the newname()
function are based solely on the filename and thus will always give
the same answer. So record any answer we get and just reuse it
whenever we hit the same filename again.
If the filtering expressions contain only a single short pathname, this
has no measurable effect, but for several paths (e.g. listing all
builtin/*.c files individually in git.git) it can add up to a few
percent of overall runtime.
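A minimal sketch of the memoization; the real newname() applies the
user's path filtering/renaming expressions, so the rename rule below
is invented:

```python
calls = []

def newname(filename):
    calls.append(filename)   # track how often the expensive path is taken
    return filename.replace('builtin/', 'cmds/')

_cache = {}

def cached_newname(filename):
    # Compute once per distinct filename, then reuse the recorded answer.
    if filename not in _cache:
        _cache[filename] = newname(filename)
    return _cache[filename]
```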
Signed-off-by: Elijah Newren <newren@gmail.com>
Repeatedly using non-compiled regexes is rather wasteful of resources.
Pre-compile these and use the cached versions.
I ran
git filter-repo --invert-paths --path configure.ac --dry-run
and then for timing ran
cat .git/filter-repo/fast-export.original | time git filter-repo \
--invert-paths --path configure.ac --dry-run --stdin
on the git.git repository (with tags of blobs and tags of tags deleted).
Comparing the timings before and after this change, I see about a 13%
overall speedup just from caching the regexes.
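The change amounts to moving the compile out of the hot loop; the
pattern below is illustrative, not one of filter-repo's actual regexes:

```python
import re

# Compile once at module load; reuse the compiled object per message.
HASH_RE = re.compile(r'\b[0-9a-f]{7,40}\b')

def find_hashes(message):
    return HASH_RE.findall(message)
```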
Signed-off-by: Elijah Newren <newren@gmail.com>
Python wants filenames with underscores instead of hyphens and with a
.py extension. We really want the main file named git-filter-repo, but
we can add a git_filter_repo.py symlink. Doing so dramatically
simplifies the steps needed to import it as a library in external
Python scripts.
Signed-off-by: Elijah Newren <newren@gmail.com>
Most filtering operations are not interested in the time that commits
were authored or committed, or when tags were tagged. As such,
translating the string representation of the date into a datetime object
is wasted effort, and causes us to waste more time later as we have to
translate it back into a string.
Instead, provide string_to_date() and date_to_string() functions so that
callers can perform the translation if wanted, and let the normal case
be fast.
Provides a small but noticeable speedup when just filtering based on
paths; about a 3.5% improvement in execution time for writing the new
history.
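Hedged sketches of the two helpers; git_filter_repo's real versions
may differ in details, but raw fast-export dates have the form
"1234567890 -0700":

```python
from datetime import datetime, timedelta, timezone

def string_to_date(datestring):
    # "1234567890 -0700" -> timezone-aware datetime
    (unix_timestamp, tz_offset) = datestring.split()
    tz_minutes = int(tz_offset[1:3]) * 60 + int(tz_offset[3:5])
    if tz_offset[0] == '-':
        tz_minutes = -tz_minutes
    tz = timezone(timedelta(minutes=tz_minutes))
    return datetime.fromtimestamp(int(unix_timestamp), tz)

def date_to_string(dateobj):
    # timezone-aware datetime -> "1234567890 -0700"
    seconds = int(dateobj.utcoffset().total_seconds())
    sign = '-' if seconds < 0 else '+'
    seconds = abs(seconds)
    return '%d %s%02d%02d' % (int(dateobj.timestamp()), sign,
                              seconds // 3600, (seconds % 3600) // 60)
```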
Signed-off-by: Elijah Newren <newren@gmail.com>
We have to ask fast-import for the new names of commits, but doing so
immediately upon dumping out the commit related information requires
context switches and waiting for fast-import to parse and handle more
information. We don't need to know the new name of the commit until we
run across a subsequent commit that referenced it in the commit message
by its old ID.
So, speed things up dramatically by waiting until we need the new
name of a commit (or until the fast-import output pipe we are
communicating with is likely getting full) before blocking on reading
new commit hashes.
Signed-off-by: Elijah Newren <newren@gmail.com>
Treat fast_import_pipes more like the other parameters to
FastExportFilter.run(), both for consistency, and because it will allow
us to more easily defer doing blocking reads for new commit names until
we actually need to know the new commit hashes corresponding to old
commit ids.
Signed-off-by: Elijah Newren <newren@gmail.com>
Once we rewrite history, our history will be unrelated to our original
upstream. As such, migrate refs/remotes/origin/* to refs/heads/*
(our sanity check already verified that if both names exist they are
equal; if the user used --force then just delete the remote-tracking
branch and leave the local branch as is), and then delete the 'origin'
remote.
This has a few advantages:
* People expect to work with refs/heads/*, not refs/remotes/origin/*,
and will be more likely to write filters based on those.
* People will probably need to push the new history somewhere when
they are done, and it's easier if we have it in refs/heads/* than
if it's under refs/remotes/origin/*.
* It encourages people to use good hygiene and not mix old and new
histories. If users really want, they can push the repo (or parts
thereof) back over their original history, but they should have to
take extra steps to do it instead of just a 'git push'.
Signed-off-by: Elijah Newren <newren@gmail.com>