In commit 509a624 (filter-repo: fix issue with pruning of empty commits,
2019-10-03), it was noted that when the first parent is pruned away,
then we need to generate a corrected list of file changes relative to
the new first parent. Unfortunately, we did not apply our set of file
filters to that new list of file changes, causing us to possibly
introduce many unwanted files from the second parent into the history.
The testcase added at the time was rather lax and totally missed this
problem (which possibly exacerbated the original bug being fixed rather
than helping). Tighten the testcase, and fix the error by filtering the
generated list of file changes.
Signed-off-by: Elijah Newren <newren@gmail.com>
Some of the systems I ran on had a 'python3-coverage' and some had a
'coverage3' program. More were of the latter name, but more
importantly, the upstream tarball only creates the latter name;
apparently the former was just added by some distros. So, switch to the
more official name of the program.
Signed-off-by: Elijah Newren <newren@gmail.com>
It appears that in addition to Windows requiring cwd be a string (and
not a bytestring), it also requires the command line arguments to be
unicode strings. On the surface this appears to be a python-on-Windows
issue (attempts to quote things that assume the arguments are all
strings), but whether it's solely a python-on-Windows issue or there is
also a deeper Windows issue, we can work around this brain-damage by
extending the SubprocessWrapper slightly. As with the cwd changes, only
apply this on Windows and not elsewhere because there are perfectly
legitimate reasons to pass non-unicode parameters (e.g. filenames that
are not valid unicode).
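A hedged sketch of the shape of this extension (the names here are
illustrative and need not match the actual code): convert each
bytestring argument to str before delegating to subprocess, but only in
the Windows-only wrapper.

  import subprocess

  class SubprocessWrapper:
    @staticmethod
    def decodify(args):
      # Illustrative helper: python-on-Windows quotes the command line
      # assuming str arguments, so convert any bytes arguments first.
      return [a.decode('utf-8') if isinstance(a, bytes) else a
              for a in args]

    @staticmethod
    def call(command, **kwargs):
      return subprocess.call(SubprocessWrapper.decodify(command), **kwargs)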
Signed-off-by: Elijah Newren <newren@gmail.com>
Unfortunately, it appears that Windows does not allow the 'cwd' argument
of various subprocess calls to be a bytestring. That may be functional
on Windows since Windows-related filesystems are allowed to require that
all file and directory names be valid unicode, but not all platforms
enforce such restrictions. As such, I certainly cannot change
cwd=directory
to
cwd=decode(directory)
because that could break on other platforms (and perhaps even on Windows
if someone is trying to read a non-native filesystem). Instead, create
a SubprocessWrapper class that will always call decode on the cwd
argument before passing along to the real subprocess class. Use these
wrappers on Windows, and do not use them elsewhere.
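A rough sketch of the idea (simplified; the real wrapper covers more of
the subprocess API than shown here):

  import os
  import subprocess

  def decode(value):
    # Illustrative stand-in for filter-repo's decode(): bytes -> str.
    return value.decode('utf-8') if isinstance(value, bytes) else value

  class SubprocessWrapper:
    @staticmethod
    def call(*args, **kwargs):
      if 'cwd' in kwargs:
        kwargs['cwd'] = decode(kwargs['cwd'])  # Windows rejects bytes here
      return subprocess.call(*args, **kwargs)

  # Only pick the wrapper on Windows; elsewhere keep raw subprocess so
  # bytestring arguments and cwd values continue to work unchanged.
  subproc = SubprocessWrapper if os.name == 'nt' else subprocess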
Signed-off-by: Elijah Newren <newren@gmail.com>
Note that this isn't a version *number* or even the more generalized
version string that folks are used to seeing, but a version hash (or
leading portion thereof).
A few important points:
* These version hashes are not strictly monotonically increasing
values. Like I said, these aren't version numbers. If that
bothers you, read on...
* This scheme has incredibly nice semantics satisfying a pair of
properties that most version schemes would assume are mutually
incompatible:
This scheme works even if the user doesn't have a clone of
filter-repo and doesn't require any build step to inject the
version into the program; it works even if people just download
git-filter-repo.py off GitHub without any of the other sources.
And:
This scheme means that a user is running precisely version X of
the code, with the version not easily faked or misrepresented
when third parties edit the code.
Given the wonderful semantics provided by satisfying this pair of
properties that all other versioning schemes seem to miss out on, I
think I should name this scheme. How about "Semantic Versioning"?
(Hehe...)
* The version hash is super easy to use; I just go to my own clone of
filter-repo and run either:
git show $VERSION_HASH
or
git describe $VERSION_HASH
* A human consumable version might suggest to folks that this software
is something they might frequently use and upgrade. This program
should only be used in exceptional cases (because rewriting history
is not for the faint of heart).
* A human consumable version (i.e. a version number or even the
more relaxed version strings in more common use) might suggest to
folks that they can rely on strict backward compatibility. It's
nice to subtly undercut any such assumption.
* Despite all that, I will make releases (downloadable tarballs with
real version numbers in the tarball name; I'm just going to re-use
whatever version git is released with at the time). But those
version numbers won't be used by the --version option; instead the
version hash will.
Signed-off-by: Elijah Newren <newren@gmail.com>
In order to build the correct tree for a commit, git-fast-import always
takes a list of file changes for a merge commit relative to the first
parent.
When the entire first-parent history of a merge commit is pruned away
and the merge had paths with no difference relative to the first parent
but which differed relative to later parents, then we really need to
generate a new list of file changes in order to have one of those other
parents become the new first parent. An example might help clarify...
Let's say that there is a merge commit, and:
* it resolved differences in pathA between its two parents by taking
the version of pathA from the first parent.
* pathB was added in the history of the second parent (it is not
present in the first parent) and is NOT included in the merge commit
(either being deleted, or via rename treated as deleted and added as
something else)
For this merge commit, neither pathA nor pathB differ from the first
parent, and thus wouldn't appear in the list of file changes shown by
fast-export. However, when our filtering rules determine that the first
parent (and all its parents) should be pruned away, then the second
parent has to become the new first parent of the merge commit. But to
end up with the right files in the merge commit despite using a
different parent, we need a list of file changes that specifies the
changes for both pathA and pathB.
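One way to picture the regeneration step (an illustration of the idea,
not the code filter-repo actually uses): diff the merge commit against
the parent being promoted to first parent, and turn every differing
path into a file change.

  import subprocess

  def changes_vs_new_first_parent(repo, new_first_parent, merge_commit):
    # Every path that differs between the promoted parent and the merge
    # commit (pathA and pathB in the example above) must show up in the
    # regenerated list of file changes given to fast-import.
    out = subprocess.check_output(
        ['git', 'diff-tree', '-r', '--name-status',
         new_first_parent, merge_commit], cwd=repo)
    changes = []
    for line in out.splitlines():
      if b'\t' not in line:
        continue                      # skip any non-change lines
      status, path = line.split(b'\t', 1)
      changes.append((status, path))  # e.g. (b'M', b'pathA'), (b'D', b'pathB')
    return changes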
Signed-off-by: Elijah Newren <newren@gmail.com>
Allow folks to periodically update the export of a live repo without
re-exporting from the beginning. This is a performance improvement, but
can also be important for collaboration. For example, for sensitivity
reasons, folks might want to export a subset of a repo and update the
export periodically. While this could be done by just re-exporting the
repository anew each time, there is a risk that the paths used to
specify the wanted subset might need to change in the future; making the
user verify that their paths (including globs or regexes) don't also
pick up anything from history that was previously excluded, so that
they don't get a divergent history, is not very user friendly. Allowing
them
to just export stuff that is new since the last export works much better
for them.
Signed-off-by: Elijah Newren <newren@gmail.com>
Commit 346f2ba891 (filter-repo: make reencoding of commit messages
togglable, 2019-05-11) made reencoding of commit messages togglable but
forgot to add parsing and outputting of the encoding header itself. Add
such ability now.
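For reference, a commit block in a fast-export stream may carry an
'encoding <name>' header between the committer line and the 'data'
directive holding the message. A minimal sketch of recognizing it (not
the actual parser) looks something like:

  import io

  def read_optional_encoding(stream):
    # If the next header line is 'encoding <name>', consume and return
    # the name; otherwise rewind so normal parsing continues.
    pos = stream.tell()
    line = stream.readline()
    if line.startswith(b'encoding '):
      return line[len(b'encoding '):].rstrip(b'\n')
    stream.seek(pos)
    return None

  stream = io.BytesIO(b'encoding ISO-8859-1\ndata 3\nhi\n')
  print(read_optional_encoding(stream))   # b'ISO-8859-1'

The same header then needs to be written back out whenever the message
is not being reencoded.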
Signed-off-by: Elijah Newren <newren@gmail.com>
External rewrite tools using filter-repo as a library may want to add
additional objects into the stream. Some examples in t/t9391 did this
using an internal _output field and using syntax that did not seem so
clear. Provide an insert() method for doing this, and convert existing
cases over to it.
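A hedged sketch of the intended usage from a script that imports
filter-repo as a library (constructor details here are from memory and
may not match the real API exactly):

  import git_filter_repo as fr

  def add_notice(commit, metadata):
    # Emit a brand new blob through the filter, then reference it from
    # the commit being processed; no poking at repo_filter._output.
    blob = fr.Blob(b'This history was rewritten with git-filter-repo.\n')
    repo_filter.insert(blob)
    commit.file_changes.append(
        fr.FileChange(b'M', b'NOTICE', blob.id, b'100644'))

  args = fr.FilteringOptions.parse_args(['--force'])
  repo_filter = fr.RepoFilter(args, commit_callback=add_notice)
  repo_filter.run()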
Signed-off-by: Elijah Newren <newren@gmail.com>
When we prune a commit for being empty, there is no update to the branch
associated with the commit in the fast-import stream. If the parent
commit had been associated with a different branch, then the branch
associated with the pruned commit would not be updated without
additional measures. In the past, we resolved this by recording that
the branch needed an update in _seen_refs. While this works, it is a
bit more complicated than just issuing an immediate Reset. Also, note
that we need to avoid calling callbacks on that Reset because those
could rename branches (again, if the commit-callback already renamed
once) causing us to not update the intended branch.
There was actually one testcase where the old method didn't work: when a
branch was pruned away to nothing. A testcase accidentally encoded the
wrong behavior, hiding this problem. Fix the testcase to check for
correct behavior.
Signed-off-by: Elijah Newren <newren@gmail.com>
Add a flag for specifying a file filled with blob-ids which will be
stripped from the repository.
Signed-off-by: Elijah Newren <newren@gmail.com>
Fix a few issues and add a token testcase for partial repo filtering.
Add a note about how I think this is not a particularly interesting or
core usecase for filter-repo, even if I have put some good effort into
the fast-export side to ensure it worked. If there is a core usecase
that can be addressed without causing usability problems (particularly
the "don't mix old and new history" edict for normal rewrites), then
I'll be happy to add more testcases, document it better, etc.
Signed-off-by: Elijah Newren <newren@gmail.com>
Make several fixes around --source and --target:
* Explain steps we skip when source or target locations are specified
* Only write reports to the target directory, never the source
* Query target git repo for final ref values, not the source
* Make sure --debug messages avoid throwing TypeErrors due to mixing
strings and bytes
* Make sure to include entries in ref-map that weren't in the original
target repo
* Don't:
* worry about mixing old and new history (i.e. nuking refs
that weren't updated, expiring reflogs, gc'ing)
* attempt to map refs/remotes/origin/* -> refs/heads/*
* disconnect origin remote
* Continue (but only in target repo):
* fresh-clone sanity checks
* writing replace refs
* doing a 'git reset --hard'
Signed-off-by: Elijah Newren <newren@gmail.com>
Add a flag for filtering out blobs based on their size, and allow the
size to be specified using 'K', 'M', or 'G' suffixes.
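The suffix handling amounts to something like the following sketch
(illustrative, assuming 1024-based units):

  def parse_size(spec):
    # '500', '2K', '10M', '1G' -> number of bytes
    multipliers = {'K': 1024, 'M': 1024**2, 'G': 1024**3}
    suffix = spec[-1:].upper()
    if suffix in multipliers:
      return int(spec[:-1]) * multipliers[suffix]
    return int(spec)

  assert parse_size('2K') == 2048
  assert parse_size('10M') == 10 * 1024**2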
Signed-off-by: Elijah Newren <newren@gmail.com>
Imperative form sounds better than --empty-pruning and
--degenerate-pruning, and it probably works better with command line
completion.
Signed-off-by: Elijah Newren <newren@gmail.com>
The reset directive can specify a commit hash for the 'from' directive,
which can be used to reset to a specific commit, or, if the hash is all
zeros, then it can be used to delete the ref. Support such operations.
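In other words, roughly (a sketch of the rule, not the actual handler):

  ZERO_HASH = b'0' * 40

  def interpret_reset_from(ref, from_hash):
    # An all-zeros hash in the 'from' line means "delete this ref";
    # anything else means "point this ref at that commit".
    if from_hash == ZERO_HASH:
      return ('delete', ref)
    return ('update', ref, from_hash)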
Signed-off-by: Elijah Newren <newren@gmail.com>
For other programs importing git-filter-repo as a library and passing a
blob, commit, tag, or reset callback to RepoFilter, pass a second
parameter to these functions with extra metadata they might find useful.
For simplicity of implementation, this technically changes the calling
signature of the --*-callback functions passed on the command line, but
we hide that behind a _do_not_use_this_variable parameter for now, leave
it undocumented, and encourage folks who want to use it to write an
actual python program that imports git-filter-repo. In the future, we
may modify the --*-callback functions to not pass this extra parameter,
or if it is deemed sufficiently useful, then we'll rename the second
parameter and document it.
As already noted in our API compatibility caveat near the top of
git-filter-repo, I am not guaranteeing API backwards compatibility.
That especially applies to this metadata argument, other than the fact
that it'll be a dict mapping strings to some kind of value. I might add
more keys, rename them, change the corresponding value, or even remove
keys that used to be part of metadata.
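For illustration, a library caller might look something like this
(hedged sketch; the exact keys present in the metadata dict are
deliberately unspecified):

  import git_filter_repo as fr

  def my_commit_callback(commit, metadata):
    # metadata is a dict mapping strings to values; treat its contents
    # defensively since keys may be added, renamed, or removed.
    if metadata:
      print('metadata keys available:', sorted(metadata))
    commit.message = commit.message.replace(b'secret-project',
                                            b'project-x')

  args = fr.FilteringOptions.parse_args(['--force'])
  fr.RepoFilter(args, commit_callback=my_commit_callback).run()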
Signed-off-by: Elijah Newren <newren@gmail.com>
Location of filtering logic was previously split in a confusing fashion
between FastExportFilter and RepoFilter. Move all filtering logic from
FastExportFilter into RepoFilter, and rename the former to
FastExportParser to reflect this change.
One downside of this change is that FastExportParser's _parse_commit
holds two pieces of information (orig_parents and had_file_changes)
which are not part of the commit object but which are now needed by
RepoFilter. Adding those bits of info to the commit object does not
make sense, so for now we pass an auxiliary dict with the
commit_callback that has these two fields. This information is not
passed along to external commit_callbacks passed to RepoFilter, though,
which seems suboptimal. To be fair, though, commit_callbacks to
RepoFilter never had access to this information, so this is not a new
shortcoming; it just seems more apparent now.
Signed-off-by: Elijah Newren <newren@gmail.com>
I introduced this over a decade ago thinking it would come in handy in
some special case, and the only place I used it was in a testcase that
existed almost solely to increase code coverage. Modify the testcase to
instead demonstrate how it is trivial to get the effects of the
everything_callback without it being present.
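For illustration, the same effect is easy to get by registering one
function for each object type (sketch; exact parameter names may
differ):

  import git_filter_repo as fr

  def handle_everything(obj, metadata):
    # One function hooked up to every callback slot behaves just like
    # the old everything_callback did.
    print('processing a', type(obj).__name__)

  args = fr.FilteringOptions.parse_args(['--force'])
  fr.RepoFilter(args,
                blob_callback=handle_everything,
                commit_callback=handle_everything,
                tag_callback=handle_everything,
                reset_callback=handle_everything).run()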
Signed-off-by: Elijah Newren <newren@gmail.com>
This allows the user to put a whole bunch of paths they want to keep (or
want to remove) in a file and then just provide the path to it. They
can also use globs or regexes (similar to --replace-text) and can also
do renames. In fact, this allows regex renames, despite the fact that I
never added a --path-rename-regex option.
Signed-off-by: Elijah Newren <newren@gmail.com>
Using an exact path (file or directory) for --path-rename instead of a
prefix removes an ugly caveat from the documentation, makes it operate
similarly to --path, and will make it easier to reuse common code when I
add the --paths-from-file option. Switch over, and replace the
startswith() check by a call to filename_matches().
Signed-off-by: Elijah Newren <newren@gmail.com>
This new flag allows people to filter files solely based on their
basename rather than on their full path within the repo, making it
easier to e.g. remove all .DS_Store files or keep all README.md
files.
Signed-off-by: Elijah Newren <newren@gmail.com>
This class only represents one FileChange, so fix the misnomer and make
the purpose of this object clearer to others.
Signed-off-by: Elijah Newren <newren@gmail.com>
This adds the ability to automatically add new replacement refs for each
rewritten commit (as well as delete or update replacement refs that
existed before the run). This will allow users to use either new or old
commit hashes to reference commits locally, though old commit hashes
will need to be unabbreviated. The only requirement for this to work
is that the person who does the rewrite also needs to push the replace
refs up where other users can grab them, and users who want to use them
need to modify their fetch refspecs to grab the replace refs.
However, other tools external to git may not understand replace refs...
Tools like Gerrit and GitHub apparently do not yet natively understand
replace refs. Trying to view "commits" by the replacement ref will
yield various forms of "Not Found" in each tool. One has to instead try
to view it as a branch with an odd name (including "refs/replace/"), and
often branches are accessed via a different URL style than commits so it
becomes very non-obvious to users how to access the info associated with
an old commit hash.
* In Gerrit, instead of being able to search on the sha1sum or use a
pre-defined URL to search and auto-redirect to the appropriate code
review with
https://gerrit.SITE.COM/#/q/${OLD_SHA1SUM},n,z
one instead has to have a special plugin and go to a URL like
https://gerrit.SITE.COM/plugins/gitiles/ORG/REPO/+/refs/replace/${OLD_SHA1SUM}
but then the user isn't shown the actual code review and will need
to guess which link to click on to get to it (and it'll only be
there if the user included a Change-Id in the commit message).
* In GitHub, instead of being able to go to a URL like
https://github.SITE.COM/ORG/REPO/commit/${OLD_SHA1SUM}
one instead has to navigate based on branch using
https://github.SITE.COM/ORG/REPO/tree/refs/replace/${OLD_SHA1SUM}
but that will show a listing of commits instead of information about
a specific commit; the user has to manually click on the first commit
to get to the desired location.
For now, providing replace refs at least allows users to access
information locally using old IDs; perhaps in time, as other external
tools gain a better understanding of how to use replace refs, the
barrier to history rewrites will decrease enough that big projects that
really need it (e.g. those that have committed many sins by committing
stupidly large useless binary blobs) can at least seriously contemplate
the undertaking. History rewrites will always have some drawbacks and
pain associated with them, as they should, but when warranted it's nice
to have transition plans that are smoother than a massive flag day.
Signed-off-by: Elijah Newren <newren@gmail.com>
We have a good default for pruning of empty commits and degenerate merge
commits: only pruning such commits that didn't start out that way (i.e.
that couldn't intentionally have been empty or degenerate). However,
users may have reasons to want to aggressively prune such commits (maybe
they used BFG Repo-Cleaner or filter-branch previously and have lots of
cruft commits that they want removed), and we may as well allow them to
specify that they don't want pruning too, just to be flexible.
Signed-off-by: Elijah Newren <newren@gmail.com>
fast-import syntax declares how to specify the parents of a commit with
'from' and possibly 'merge' directives, but it oddly also allows parents
to be implicitly specified via branch name. The documentation is easy
to misread:
"Omitting the from command in the first commit of a new branch will
cause fast-import to create that commit with no ancestor."
Note that the "in the first commit of a new branch" is key here. It is
reinforced later in the document with:
"Omitting the from command on existing branches is usually desired, as
the current commit on that branch is automatically assumed to be the
first ancestor of the new commit."
Desirability of operating this way aside, this raises an interesting
question: what if you only have one branch in some repository, but that
branch has more than one root commit? How does one use the fast-import
format to import such a repository? The fast-import documentation
doesn't say, as far as I can tell, but using a 'reset' directive
without providing a 'from' reference for it is the way to go.
Modify filter-repo to understand implicit 'from' commits, and to
appropriately issue 'reset' directives when we need additional root
commits.
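A sketch of the kind of stream fragment involved (simplified and
illustrative only):

  def start_new_root(out, branch, mark, committer, message):
    # Without the leading 'reset', fast-import would treat the current
    # tip of <branch> as an implicit first parent of this commit; with
    # it, the commit becomes an additional root on the same branch.
    out.write(b'reset %s\n' % branch)
    out.write(b'commit %s\n' % branch)
    out.write(b'mark :%d\n' % mark)
    out.write(b'committer %s\n' % committer)
    out.write(b'data %d\n%s\n' % (len(message), message))
    # ... 'deleteall' and the file changes for the new root follow ...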
Signed-off-by: Elijah Newren <newren@gmail.com>
This is by far the largest python3 change; it consists basically of
* using b'<str>' instead of '<str>' in lots of places
* adding a .encode() if we really do work with a string but need to
get it converted to a bytestring
* replacing uses of .format() with interpolation via the '%' operator,
since bytestrings don't have a .format() method.
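A contrived before/after pair showing all three kinds of change at
once:

  # python2-era style:
  #   ref = 'refs/heads/{}'.format(branch)
  #   out.write('progress {}\n'.format(msg))
  # python3 bytestring style:
  branch = b'master'
  msg = 'Rewrote 42 commits'                  # still a real str
  ref = b'refs/heads/%s' % branch             # b'' literal + '%' interpolation
  progress_line = b'progress %s\n' % msg.encode()   # .encode() to get bytes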
Signed-off-by: Elijah Newren <newren@gmail.com>
While the underlying fast-export and fast-import streams explicitly
separate 'from' commit (first parent) and 'merge' commits (all other
parents), foisting that separation into the Commit object for
filter-repo forces additional places in the code to deal with that
distinction. It results in less clear code, and especially does not
make sense to push upon folks who may want to use filter-repo as a
library.
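The resulting shape of the object is roughly (field names illustrative,
trimmed down to the relevant part):

  class Commit:
    def __init__(self, branch, parents=None, file_changes=None):
      self.branch = branch
      # One ordered list: parents[0] is what the stream calls 'from',
      # and parents[1:] are its 'merge' parents. Callers that reorder,
      # prune, or inspect parents only ever deal with a single list.
      self.parents = parents or []
      self.file_changes = file_changes or []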
Signed-off-by: Elijah Newren <newren@gmail.com>
Use UTF-8 chars in user names, filenames, branch names, tag names, and
file contents. Also include invalid UTF-8 in file contents; filter-repo
should be able to handle binary data.
Signed-off-by: Elijah Newren <newren@gmail.com>
The sorting order of entries written to files in the analysis directory
didn't specify a secondary sort, thus making the order dependent on the
random-ish sorting order of dictionaries and making it inconsistent
between python versions. While the secondary order didn't matter much,
having a defined order makes it slightly easier to define a single
testcase that can work across versions.
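The fix amounts to sorting with a tuple key so that ties in the primary
value come out in a stable order on every python version; for example:

  # sort by size descending, then by path so equal sizes are ordered
  # deterministically rather than by dict insertion/hash order
  sizes = {b'big.bin': 1000, b'alpha.txt': 10, b'beta.txt': 10}
  ordered = sorted(sizes.items(), key=lambda kv: (-kv[1], kv[0]))
  # [(b'big.bin', 1000), (b'alpha.txt', 10), (b'beta.txt', 10)]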
Signed-off-by: Elijah Newren <newren@gmail.com>
Assuming filter-repo will be merged into git.git, use "git" for the
TEXTDOMAIN, and assume its build system will replace "@@LOCALEDIR@@"
appropriately.
Note that the xgettext command used to grab string translations is
nearly identical to the one for C files in git.git; just use
--language=python instead and add --join-existing to avoid overwriting
the po/git.pot file. In other words, use the command:
xgettext -o../git/po/git.pot --join-existing --force-po \
--add-comments=TRANSLATORS: \
--msgid-bugs-address="Git Mailing List <git@vger.kernel.org>" \
--from-code=UTF-8 --language=python \
--keyword=_ --keyword=N_ --keyword="Q_:1,2" \
git-filter-repo
To create or update the translation, go to git.git/po and run either of:
msginit --locale=XX
msgmerge --add-location --backup=off -U XX.po git.pot
Once you've updated the translation, within git.git just build as
normal. That's all that's needed.
Signed-off-by: Elijah Newren <newren@gmail.com>
The AncestryGraph setup assumed we had previously seen all commits which
would be used as parents; that interacted badly with doing an
incremental import. Add a function which can be used to record external
commits, each of which we'll treat like a root commit (i.e. depth 1 and
having no parents of its own). Add a test to prevent regressions.
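A rough sketch of the idea (heavily simplified relative to the real
AncestryGraph):

  class AncestryGraph:
    def __init__(self):
      self.depth = {}      # commit -> generation depth
      self.parents = {}    # commit -> known parents

    def record_external_commits(self, external_commits):
      # Commits that only exist in a previous export: treat each like a
      # root commit, i.e. depth 1 and no parents of its own.
      for commit in external_commits:
        if commit not in self.depth:
          self.depth[commit] = 1
          self.parents[commit] = []

    def add_commit_and_parents(self, commit, parents):
      # Safe now even when some parents were only seen in an earlier run.
      self.parents[commit] = parents
      self.depth[commit] = 1 + max((self.depth[p] for p in parents),
                                   default=0)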
Signed-off-by: Elijah Newren <newren@gmail.com>