We previously would abort if we had been requested to rename files and
that caused two different files to go to the same path. However, if
the files have identical contents and mode, then we can treat the
request as a desire from the user to just coalesce the extra copies.
Signed-off-by: Elijah Newren <newren@gmail.com>
Pruning of commits which become empty can result in a variety of
topology changes: a merge may have lost all its ancestors corresponding
to one (or more) of its parents, a merge may end up merging a commit
with itself, or a merge may end up merging a commit with its own
ancestor. Merging a commit with itself makes no sense, so we'd rather
prune down to one parent and hopefully prune the merge commit, but we do
need to worry about whether there are changes in the commit and whether
the original merge commit also merged something with itself. We have
similar cases for dealing with a merge of some commit with its own
ancestor: if the original topology did the same, or the merge commit has
additional file changes, then we cannot remove the commit. But,
otherwise, the commit can be pruned.
Add testcases covering the variety of changes that can occur to make
sure we get them right.
Signed-off-by: Elijah Newren <newren@gmail.com>
Due to pruning of empty commits, merge commits can become degenerate
(same commit serving as both parents, or one parent is an ancestor of
one of the others). While we usually want to allow such degenerate
merge commits to themselves be pruned (assuming they add no additional
file changes), we do not want to prune them if the merge commit in the
original repository had the same degenerate topology. So, we need to
keep track of the ancestry graph of the original repository as well and
include it in the logic about whether to allow merge commits to be
pruned.
Signed-off-by: Elijah Newren <newren@gmail.com>
There are several cases to worry about with commit pruning: commits
that start empty and had no parent, commits that start empty and
had a parent which may or may not get pruned, commits which had
changes but became empty, commits which were merges but lost a line
of ancestry and have no changes of their own, etc. Add testcases
covering these cases, though most topology related ones will be
deferred to a later set of tests.
Signed-off-by: Elijah Newren <newren@gmail.com>
The reason we want to sometimes keep commits that start empty is because
they may have been intentionally added for build or versioning reasons.
Not all commits that start empty are useful, even if intentional,
though, because they could have pre-dated the introduction of a
directory we are filtering for. So, we have always allowed an
exception: if the number of parents has been reduced, we also allow
pruning commits that started empty.
However, there is a similar case: one or more contiguous chunks of
history may only touch some directories/files that are not of interest;
empty commits within that range of history are likewise uninteresting to
us. Since "interesting" empty commits take the form of a new commit
on top of interesting history (because otherwise they lose their special
build or versioning utility), we should loosen the rules so that
empty commits whose parent was pruned are also considered prunable;
we no longer use the existence of some other distant ancestor of that
empty commit in determining whether the empty commit is prunable.
Signed-off-by: Elijah Newren <newren@gmail.com>
Due to the special handling of 'from' in the fast_export stream and the
aggregation of the 'from' commit with the 'merge'd commits, a parentless
commit has its parents represented as [None] rather than []. We had
fixed this up in other places, but forgot to do so with orig_parents,
breaking our comparison. Handle it for orig_parents too.
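The normalization involved can be sketched as follows (names here are illustrative, not the actual filter-repo internals): strip the [None] sentinel before comparing parent lists.

```python
# A parentless commit can come through the fast_export stream with its
# parents recorded as [None] instead of []; normalize that sentinel away
# before comparing against orig_parents.
def normalize_parents(parents):
    return [] if parents == [None] else parents
```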
Signed-off-by: Elijah Newren <newren@gmail.com>
fast-import from versions of git up to at least 2.21.0 had a bug in the
handling of the get-mark directive that would cause it to abort with a
parsing error on valid input. While a fix has been submitted upstream
for this, add some extra newlines in a way that will work with both old
and new git versions.
Signed-off-by: Elijah Newren <newren@gmail.com>
Many of the callback functions might only be a single line, and as such
instead of forcing the user to write a full blown program with an import
and everything, let them just specify the body of the callback function
as a command line parameter. Add several tests of this functionality as
well.
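A rough sketch of how such a command-line body could be wrapped into a real function (the exact wrapping filter-repo uses may differ; make_callback and its argument names are illustrative):

```python
# Turn a callback body supplied as a command-line string into a callable
# by wrapping it in a function definition and exec'ing the result.
def make_callback(argname, body):
    src = "def callback({}):\n".format(argname)
    src += "".join("  {}\n".format(line) for line in body.splitlines())
    scope = {}
    exec(src, scope)
    return scope["callback"]
```

With this, a user-supplied body like "return message.upper()" becomes a working one-argument callback without any boilerplate program.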
Signed-off-by: Elijah Newren <newren@gmail.com>
Add callbacks for:
* filename
simplifies filtering/renaming based solely on filename; return
None to have file removed, or original or new name for file
* message
simplifies tweaking both commit and tag messages; if you want to
tweak just one of the two, use either tag_callback or
commit_callback
* person_name
simplifies tweaking actual names of people without worrying where
they come from (author, committer, or tagger)
* email
simplifies tweaking email addresses without worrying where they
come from (author, committer, or tagger)
* refname
simplifies tweaking reference names, regardless of whether they
come from FastExport commit objects, reset objects, or tag objects
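An illustrative filename callback following the contract described above (the specific filenames and rules here are hypothetical):

```python
# Return None to have the file removed, the original name to keep it
# unchanged, or a new name to rename it.
def filename_callback(filename):
    if filename.endswith('.tmp'):
        return None                 # drop temporary files entirely
    if filename == 'README':
        return 'README.md'          # rename
    return filename                 # keep unchanged
```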
Signed-off-by: Elijah Newren <newren@gmail.com>
I want to allow callbacks that could operate on similar pieces of commit
or reset or tag objects (e.g. reference names, email addresses);
restructure the current ones slightly to both allow more general ones to
be added and to make the existing ones slightly clearer.
Signed-off-by: Elijah Newren <newren@gmail.com>
Users may want to run --analyze both before and after filtering, in
order both to find the big objects to remove and to verify that they
are gone and that the overall repository size and filenames are as
expected.
As such, aborting and telling the user there's a previous analysis
directory in the way is annoying; just remove it instead.
Signed-off-by: Elijah Newren <newren@gmail.com>
Users may want to run multiple filtering operations, either because it's
easier for them to do it that way, or because they want to combine both
path inclusion and exclusion. For example:
git filter-repo --path subdir
git filter-repo --invert-paths --path subdir/some-big-file
cannot be done in a single step. However, the first filtering operation
would make the repo not look like a clean clone anymore (because it is
not a clean clone anymore), causing the safety check to trigger and
requiring the --force flag. But once we've allowed them to do
repository rewriting, there's no point disallowing further rewriting.
So, write a .git/filter-repo/already_ran file when we run and treat the
presence of that file the same as providing a --force flag.
Signed-off-by: Elijah Newren <newren@gmail.com>
We need to have a list of references to rewrite (which we pass along to
fast-export), but we accepted arbitrary rev-list args. This could
backfire pretty badly if a user tried the wrong but somewhat
straightforward
git filter-repo --invert-paths --path foo bar
instead of the expected
git filter-repo --invert-paths --path foo --path bar
because the passing of 'bar' as a rev-list arg means that fast-export
happily notices that some kind of rev-list was specified but not a
meaningful one so it gives an empty output...and filter-repo interprets
an empty output as "This is a history with no commits" and promptly
deletes everything.
Partial history rewrites aren't yet properly supported anyway (I would
need to stop doing the disconnect-of-origin-remote, the deleting of
references that were filtered-away or otherwise didn't show up, and
the post-rewrite gc+prune). When I add such support, I'll revisit
how these arguments can be specified.
Signed-off-by: Elijah Newren <newren@gmail.com>
Commit.dump() showed up in a profile. Reorganize the code slightly to
build up much of the string into one big chunk before calling
file_.write(); this shaves a few percent off the total runtime. (Where
total runtime is again measured in terms of the
cat fast-export.original | git filter-repo --stdin --dry-run ...
trick mentioned a few commits back.) Trying to make a [c]StringIO
object in order to build more of the string up into a single place
to reduce the number of file_.write() calls was apparently
counter-productive, so only the header before the parents is combined
into a single string.
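The reorganization can be sketched like this (a simplified stand-in for Commit.dump(); the function name and fields are illustrative, though the stream directives match the fast-import format):

```python
# Build the commit header before the parents as one string and issue a
# single write(), instead of many small write() calls.
def dump_header(write, mark, author, committer, message):
    chunk = ("commit refs/heads/master\n"
             "mark :%d\n"
             "author %s\n"
             "committer %s\n"
             "data %d\n%s" % (mark, author, committer,
                              len(message), message))
    write(chunk)
```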
Signed-off-by: Elijah Newren <newren@gmail.com>
We repeatedly hit the same filenames over and over as we traverse
history, but our expressions for renaming or filtering within the
newname() function are based solely on the filename and thus will always
give the same answer. So record any answer we get and just use it
whenever we hit the same filename again.
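A minimal sketch of this memoization (names are illustrative; compute_rename stands in for the real filtering/renaming logic):

```python
# newname()'s answer depends only on the filename, so cache each answer
# and reuse it whenever the same filename comes up again.
_newname_cache = {}

def newname(filename, compute_rename):
    if filename not in _newname_cache:
        _newname_cache[filename] = compute_rename(filename)
    return _newname_cache[filename]
```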
If the filtering expressions contain only a single short pathname, this
has no measurable effect, but for several paths (e.g. listing all
builtin/*.c files individually in git.git) it can add up to a few
percent of overall runtime.
Signed-off-by: Elijah Newren <newren@gmail.com>
Repeatedly using non-compiled regexes is rather wasteful of resources.
Pre-compile these and use the cached versions.
I ran
git filter-repo --invert-paths --path configure.ac --dry-run
and then for timing ran
cat .git/filter-repo/fast-export.original | time git filter-repo \
--invert-paths --path configure.ac --dry-run --stdin
on the git.git repository (with tags of blobs and tags of tags deleted).
Comparing the timings before and after this change, I see about a 13%
overall speedup just from caching the regexes.
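The idea, in sketch form (the pattern below is illustrative, not one filter-repo actually uses):

```python
import re

# Compile patterns once at module load instead of passing pattern strings
# to the re module on every call, which re-checks re's internal cache
# each time.
_hash_re = re.compile(r'\b[0-9a-f]{40}\b')

def find_commit_ids(text):
    return _hash_re.findall(text)
```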
Signed-off-by: Elijah Newren <newren@gmail.com>
Python wants filenames with underscores instead of hyphens and with a
.py extension. We really want the main file named git-filter-repo, but
we can add a git_filter_repo.py symlink. Doing so dramatically
simplifies the steps needed to import it as a library in external python
scripts.
Signed-off-by: Elijah Newren <newren@gmail.com>
Most filtering operations are not interested in the time that commits
were authored or committed, or when tags were tagged. As such,
translating the string representation of the date into a datetime object
is wasted effort, and causes us to waste more time later as we have to
translate it back into a string.
Instead, provide string_to_date() and date_to_string() functions so that
callers can perform the translation if wanted, and let the normal case
be fast.
Provides a small but noticeable speedup when just filtering based on
paths; about a 3.5% improvement in execution time for writing the new
history.
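A hedged sketch of the two helpers: fast-export dates look like '1234567890 -0700' (seconds since epoch plus a UTC offset), so leaving that string untouched costs nothing, and conversion happens only for callers that ask for it. The implementations below are assumptions, not filter-repo's exact code.

```python
from datetime import datetime, timedelta, timezone

def string_to_date(datestring):
    # '1234567890 -0700' -> timezone-aware datetime
    unix_timestamp, tz_offset = datestring.split()
    sign = -1 if tz_offset.startswith('-') else 1
    hours, minutes = int(tz_offset[1:3]), int(tz_offset[3:5])
    tz = timezone(sign * timedelta(hours=hours, minutes=minutes))
    return datetime.fromtimestamp(int(unix_timestamp), tz)

def date_to_string(dateobj):
    # timezone-aware datetime -> '1234567890 -0700'
    total = int(dateobj.utcoffset().total_seconds())
    sign = '-' if total < 0 else '+'
    total = abs(total)
    return '%d %s%02d%02d' % (int(dateobj.timestamp()), sign,
                              total // 3600, (total % 3600) // 60)
```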
Signed-off-by: Elijah Newren <newren@gmail.com>
We have to ask fast-import for the new names of commits, but doing so
immediately upon dumping out the commit related information requires
context switches and waiting for fast-import to parse and handle more
information. We don't need to know the new name of the commit until we
run across a subsequent commit that referenced it in the commit message
by its old ID.
So, speed things up dramatically by waiting until we need the commit's
new name (or until the fast-import output pipe we are communicating
with is getting rather full) before blocking on reading new commit
hashes.
Signed-off-by: Elijah Newren <newren@gmail.com>
Treat fast_import_pipes more like the other parameters to
FastExportFilter.run(), both for consistency, and because it will allow
us to more easily defer doing blocking reads for new commit names until
we actually need to know the new commit hashes corresponding to old
commit ids.
Signed-off-by: Elijah Newren <newren@gmail.com>
Once we rewrite history, our history will be unrelated to our original
upstream. As such, migrate refs/remotes/origin/* to refs/heads/*
(our sanity check already verified that if both names exist they are
equal; if the user used --force then just delete the remote tracking
branch and leave the local branch as is), and then delete the 'origin'
remote.
This has a few advantages:
* People expect to work with refs/heads/*, not refs/remotes/origin/*,
and will be more likely to write filters based on those.
* People will probably need to push the new history somewhere when
they are done, and it's easier if we have it in refs/heads/* than
if it's under refs/remotes/origin/*.
* It encourages people to use good hygiene and not mix old and new
histories. If users really want, they can push the repo (or parts
thereof) back over their original history, but they should have to
take extra steps to do it instead of just a 'git push'.
Signed-off-by: Elijah Newren <newren@gmail.com>
Apparently, despite the git-fast-import.txt documentation, tagger is
optional for both fast-export and fast-import. I suspect this is
because there are several (old) tags in the linux.git repository that
have no tagger, so if we want to test rewriting linux.git history then
we need to make it optional for filter-repo too.
Signed-off-by: Elijah Newren <newren@gmail.com>
Make it easy for users to search and replace text throughout the
repository history. Instead of inventing some new syntax, reuse the
same syntax used by the BFG Repo-Cleaner's --replace-text option, namely,
a file with one expression per line of the form
[regex:|glob:|literal:]$MATCH_EXPR[==>$REPLACEMENT_EXPR]
where "$MATCH_EXPR" is by default considered to be literal text, but
could be a regex or a glob if the appropriate prefix is used. Also,
$REPLACEMENT_EXPR defaults to '***REMOVED***' if not specified. If
you want a literal '==>' to be part of your $MATCH_EXPR, then you
must also manually specify a replacement expression instead of taking
the default. Some examples:
sup3rs3kr3t
(replaces 'sup3rs3kr3t' with '***REMOVED***')
HeWhoShallNotBeNamed==>Voldemort
(replaces 'HeWhoShallNotBeNamed' with 'Voldemort')
very==>
(replaces 'very' with the empty string)
regex:(\d{2})/(\d{2})/(\d{4})==>\2/\1/\3
(replaces '05/17/2012' with '17/05/2012', and vice-versa)
The format for regex is as from
re.sub(<pattern>, <repl>, <string>) from
https://docs.python.org/2/library/re.html
The <string> comes from file contents of the repo, and you specify
the <pattern> and <repl>.
glob:Copy*t==>Cartel
(replaces 'Copyright' or 'Copyleft' or 'Copy my st' with 'Cartel')
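A sketch of parsing one such line into a substitution function, assuming the syntax described above (this mirrors the description, not necessarily the actual parsing code):

```python
import re

def parse_replace_line(line):
    # Split on the first '==>'; the replacement defaults to ***REMOVED***.
    match, sep, repl = line.partition('==>')
    if not sep:
        repl = '***REMOVED***'
    if match.startswith('regex:'):
        pattern = re.compile(match[len('regex:'):])
    elif match.startswith('glob:'):
        # crude glob handling for illustration: '*' matches anything
        pattern = re.compile(match[len('glob:'):].replace('*', '.*'))
    elif match.startswith('literal:'):
        pattern = re.compile(re.escape(match[len('literal:'):]))
    else:
        pattern = re.compile(re.escape(match))
    return lambda text: pattern.sub(repl, text)
```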
Signed-off-by: Elijah Newren <newren@gmail.com>
When non-merge commits have files in the _files_tweaked set (they were
modified by a blob or commit callback), they may become empty. However,
new_1st_parent is more accurately named
new_1st_parent_if_would_become_non_merge; it will always be None for
non-merge commits. So we need to get the correct parent.
Signed-off-by: Elijah Newren <newren@gmail.com>
When we only have an output and no input of our own, filter.run() seems
weird to call, especially since it'll only be closing a handle and waiting
for fast-import to finish. Add a finish() synonym for such a case to make
external calling code more legible.
Signed-off-by: Elijah Newren <newren@gmail.com>
This will allow exporting from one repo into a different repo, and
combined with chained RepoFilter instances from commit 81016821a1
(filter-repo: allow chaining of RepoFilter instances, 2019-01-07), will
even allow things like splicing separate repositories together.
Signed-off-by: Elijah Newren <newren@gmail.com>
We do not want to create fast-import processes only to kill them unused;
it's better to abort before those processes are created when we know we
will need to abort.
Signed-off-by: Elijah Newren <newren@gmail.com>
Allow each instance to be just input or just output so that we can splice
repos together or split one into multiple different repos.
Signed-off-by: Elijah Newren <newren@gmail.com>
If we are using --stdin, it should be okay to import into a bare repo,
but the checks were enforcing that we were in a clone with a packfile.
Relax the check to work within a bare repo as well.
Signed-off-by: Elijah Newren <newren@gmail.com>
If we have blob callbacks, we cannot pass --no-data to fast-export. Also,
with blob callbacks, any file the callback modifies could match the
modification done to the file by a subsequent commit, possibly making the
later commit empty. As such, we keep a record of all filenames modified
(by blob or commit callbacks), and then check all these filenames for all
subsequent commits to see if it causes empty commits. In particular, if
files other than these are modified in a non-merge commit, we know that
the commit will not become empty so we can bypass the empty-pruning
checks.
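The bookkeeping can be sketched as follows (names are illustrative): record every path a blob or commit callback touches, and only run the empty-commit checks when a later commit modifies nothing outside that set.

```python
# Paths modified by blob or commit callbacks.
files_tweaked = set()

def record_tweaks(changed_paths):
    files_tweaked.update(changed_paths)

def needs_empty_check(commit_paths):
    # If the commit touches any path outside files_tweaked, it cannot
    # become empty due to our rewriting, so the checks can be bypassed.
    return set(commit_paths) <= files_tweaked
```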
Signed-off-by: Elijah Newren <newren@gmail.com>
If a commit was previously a non-merge commit, then since we do not do
any kind of blob modifications (or funny parent grafting), there is no
way for a filemodify instruction to introduce the same version of a
file that already existed in the parent. As such, the only check we
need to do to determine whether the commit becomes empty is whether
file_changes is empty; subsequent more expensive checks can be skipped.
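The fast path reduces to a one-line test (hypothetical name; file_changes is the commit's list of file changes):

```python
# For a commit that was originally a non-merge, becoming empty is
# equivalent to having no file changes at all, so the expensive tree
# comparisons can be skipped.
def non_merge_becomes_empty(file_changes):
    return len(file_changes) == 0
```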
Signed-off-by: Elijah Newren <newren@gmail.com>