This class only represents a single FileChange, so fix the misnomer and
make the purpose of this object clearer to others.
Signed-off-by: Elijah Newren <newren@gmail.com>
For most repos, --fake-missing-tagger will be a no-op. But if a repo
out there has a missing tagger, then fast-import will choke on it unless
one is inserted; ask fast-export to do that.
Signed-off-by: Elijah Newren <newren@gmail.com>
This adds the ability to automatically add new replacement refs for each
rewritten commit (as well as delete or update replacement refs that
existed before the run). This will allow users to use either new or old
commit hashes to reference commits locally, though old commit hashes
will need to be unabbreviated. The only requirement for this to work
is that the person who does the rewrite also needs to push the replace
refs up where other users can grab them, and users who want to use them
need to modify their fetch refspecs to grab the replace refs.
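Concretely, the change to fetch refspecs amounts to one extra fetch line
in each user's remote configuration (the remote name here is illustrative):

```
[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	fetch = +refs/replace/*:refs/replace/*
```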
However, other tools external to git may not understand replace refs...
Tools like Gerrit and GitHub apparently do not yet natively understand
replace refs. Trying to view "commits" by the replacement ref will
yield various forms of "Not Found" in each tool. One has to instead try
to view it as a branch with an odd name (including "refs/replace/"), and
often branches are accessed via a different URL style than commits so it
becomes very non-obvious to users how to access the info associated with
an old commit hash.
* In Gerrit, instead of being able to search on the sha1sum or use a
pre-defined URL to search and auto-redirect to the appropriate code
review with
https://gerrit.SITE.COM/#/q/${OLD_SHA1SUM},n,z
one instead has to have a special plugin and go to a URL like
https://gerrit.SITE.COM/plugins/gitiles/ORG/REPO/+/refs/replace/${OLD_SHA1SUM}
but then the user isn't shown the actual code review and will need
to guess which link to click on to get to it (and it'll only be
there if the user included a Change-Id in the commit message).
* In GitHub, instead of being able to go to a URL like
https://github.SITE.COM/ORG/REPO/commit/${OLD_SHA1SUM}
one instead has to navigate based on branch using
https://github.SITE.COM/ORG/REPO/tree/refs/replace/${OLD_SHA1SUM}
but that will show a listing of commits instead of information about
a specific commit; the user has to manually click on the first commit
to get to the desired location.
For now, providing replace refs at least allows users to access
information locally using old IDs; perhaps in time, as other external
tools gain a better understanding of how to use replace refs, the
barrier to history rewrites will decrease enough that big projects that
really need it (e.g. those that have committed many sins by committing
stupidly large useless binary blobs) can at least seriously contemplate
the undertaking. History rewrites will always have some drawbacks and
pain associated with them, as they should, but when warranted it's nice
to have transition plans that are more smooth than a massive flag day.
Signed-off-by: Elijah Newren <newren@gmail.com>
Keeping empty pruning as a single section likely leads users to think
only about pruning of non-merge commits which become empty. Since merge
commits can lose parents or become degenerate, it is worth creating a
second section for this; besides, that matches the separate options we
provide to users to control the features.
Signed-off-by: Elijah Newren <newren@gmail.com>
We have a good default for pruning of empty commits and degenerate merge
commits: only pruning such commits that didn't start out that way (i.e.
that couldn't intentionally have been empty or degenerate). However,
users may have reasons to want to aggressively prune such commits (maybe
they used BFG repo filter or filter-branch previously and have lots of
cruft commits that they want removed), and we may as well allow them to
specify that they don't want pruning too, just to be flexible.
Signed-off-by: Elijah Newren <newren@gmail.com>
If a commit was a merge in the original repo, and its ancestors on at
least one side have all been filtered away, and the commit has no
filechanges relative to its remaining parent (if any), then this commit
should be pruned. We had a small logic error preventing such pruning;
fix it by checking len(parents) instead of len(orig_parents).
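The fix can be sketched roughly as follows (illustrative only; the real
method signature and surrounding logic differ):

```python
def should_prune_merge(commit):
    # orig_parents holds the parents before filtering; parents holds the
    # ones that survived.  Checking len(commit.orig_parents) was the bug:
    # the original parent count says nothing about what filtering removed.
    # A merge whose surviving parent list collapsed to at most one entry,
    # with no file changes relative to that remaining parent, adds nothing.
    return len(commit.parents) <= 1 and not commit.file_changes
```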
Signed-off-by: Elijah Newren <newren@gmail.com>
fast-import syntax declares how to specify the parents of a commit with
'from' and possibly 'merge' directives, but it oddly also allows parents
to be implicitly specified via branch name. The documentation is easy
to misread:
"Omitting the from command in the first commit of a new branch will
cause fast-import to create that commit with no ancestor."
Note that the "in the first commit of a new branch" is key here. It is
reinforced later in the document with:
"Omitting the from command on existing branches is usually desired, as
the current commit on that branch is automatically assumed to be the
first ancestor of the new commit."
Desirability of operating this way aside, this raises an interesting
question: what if you only have one branch in some repository, but that
branch has more than one root commit? How does one use the fast-import
format to import such a repository? The fast-import documentation
doesn't say, as far as I can tell, but using a 'reset' directive
without providing a 'from' reference for it is the way to go.
Modify filter-repo to understand implicit 'from' commits, and to
appropriately issue 'reset' directives when we need additional root
commits.
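A minimal illustration of the stream fragment involved (branch name and
mark are hypothetical, and the commit command is elided down to its first
lines):

```python
# The bare 'reset' (no 'from') detaches the branch, so the commit that
# follows gets no implicit first parent and becomes a second root commit
# on the already-existing branch.
fragment = (b'reset refs/heads/master\n'
            b'commit refs/heads/master\n'
            b'mark :2\n')

# Without the reset, fast-import would silently make the current tip of
# refs/heads/master the first parent of the new commit.
```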
Signed-off-by: Elijah Newren <newren@gmail.com>
Notable items:
* We use bytestrings _everywhere_. This is incredibly annoying to
me, as I think users will be tempted to use "normal" strings in
callback functions and get surprised when things compare as
unequal, but I did something like 3-4 python3 conversions with
differing amounts of bytestrings and regular strings, and I always hit
real world repositories with alternate encodings on user names
and commit messages (despite commit messages not necessarily
having a special 'encoding' field). Further, I was always
risking munging data the user didn't want by trying to 'decode'
the bytestrings into unicode, and I was probably slowing down
performance. So, in the end I gave up and everything must be a
bytestring.
* The performance of the python2 version of filter-repo drifted
slightly over time with additional features and more robust
checking (particularly the become-empty and become-degenerate
pruning), though largely still providing the same performance
as I highlighted in my BFG/filter-branch/filter-repo comparison.
There certainly wasn't any factor-of-2 difference. A pleasant
surprise was that the python2->python3 conversion appears to have
only made a difference of a couple percent to performance; some
tests were faster and others slower than the python2 version.
So performance seems to be a wash.
* The individual commits on python3-conversion do not work
independently, but rather demonstrate separate aspects of what work
was needed in the large conversion to python3.
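The comparison surprise mentioned above is easy to trip over in a
callback:

```python
# In python3, bytes and str never compare equal, so a callback using a
# "normal" string literal silently matches nothing:
branch = b'refs/heads/master'          # everything filter-repo hands out is bytes
print(branch == 'refs/heads/master')   # False -- str vs bytes
print(branch == b'refs/heads/master')  # True
```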
Signed-off-by: Elijah Newren <newren@gmail.com>
This is by far the largest python3 change; it consists basically of
* using b'<str>' instead of '<str>' in lots of places
* adding a .encode() if we really do work with a string but need to
get it converted to a bytestring
* replacing uses of .format() with interpolation via the '%' operator,
since bytestrings don't have a .format() method.
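For example (the mode, mark, and path are made up for illustration):

```python
path = 'src/main.c'.encode()   # .encode() turns a str into bytes
# bytestrings have no .format() method, but %-interpolation works on
# bytes as of python 3.5 (PEP 461):
line = b'M 100644 %s %s\n' % (b':1', path)
```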
Signed-off-by: Elijah Newren <newren@gmail.com>
Unlike str, indexing a bytestr yields an integer (the corresponding
ASCII value) instead of a bytestr of length 1. Adjust code accordingly.
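For example:

```python
data = b'tree'
# Indexing a bytes object yields an int, not a length-1 bytes object:
print(data[0])     # 116, the ASCII value of 't'
# Slicing is the usual adjustment when a bytes value is wanted:
print(data[0:1])   # b't'
# Comparisons therefore change shape too:
assert data[0] == ord('t')
assert data[0:1] == b't'
```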
Signed-off-by: Elijah Newren <newren@gmail.com>
python3 forces a couple issues for us with the conversion of globs to
regexes:
* fnmatch.translate() will ONLY operate on unicode strings, not
bytestrings. Super lame.
* newer versions of python3 modified the regex style used by
fnmatch.translate() causing us to need extra logic to 'fixup'
the regex into the form we want.
Split the code for translating the glob to a regex out into a separate
function which now houses more complicated logic to handle these extra
conditions.
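A sketch of the kind of helper involved (simplified; the real function
also contains the version-dependent regex fixups mentioned above):

```python
import fnmatch
import re

def glob_to_regex(glob_bytes):
    # fnmatch.translate() only accepts str, so round-trip through unicode
    # and re-encode the resulting pattern to keep everything as bytes.
    pattern = fnmatch.translate(glob_bytes.decode('utf-8'))
    return re.compile(pattern.encode('utf-8'))
```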
Signed-off-by: Elijah Newren <newren@gmail.com>
We need a function to transform byte strings into unicode strings for
printing error messages and occasional other uses.
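One plausible shape for such a helper (a sketch, not necessarily the
exact implementation):

```python
def decode(bytestr):
    'Turn bytes into a str for error messages, never raising on bad input.'
    try:
        return bytestr.decode('utf-8')
    except UnicodeDecodeError:
        # Fall back to latin-1, which maps every byte, so arbitrary binary
        # still yields something printable rather than crashing the error
        # path itself.
        return bytestr.decode('latin-1')
```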
Signed-off-by: Elijah Newren <newren@gmail.com>
Commit ca32c5d9afe2 ("filter-repo: workaround python<2.7.9 exec bug",
2019-04-30) put in a workaround for python versions prior to 2.7.9, but
which was incompatible with python3. Revert it as one step towards
migrating to python3.
Signed-off-by: Elijah Newren <newren@gmail.com>
Python issue 21591 will cause SyntaxError messages to be thrown if using
python versions prior to 2.7.9. Use the workaround identified in the
bug report: use the exec statement instead of the exec function, even if
this will need to be reverted for python3.
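For reference, the two forms differ only in spelling; the function form
(the only one python3 accepts) looks like this:

```python
# python3 only accepts the exec *function* form; the workaround above
# used the py2-only 'exec code in scope' statement form instead.
scope = {}
exec("result = 6 * 7", scope)
print(scope['result'])  # 42
```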
Signed-off-by: Elijah Newren <newren@gmail.com>
Extra LFs are permitted in git-fast-import syntax; they serve to make
it easier for users to read the stream (e.g. from --dry-run or --debug),
if they are so inclined.
Signed-off-by: Elijah Newren <newren@gmail.com>
When we invoked the 'ls' command of fast-import, we just passed the
filename as-is. That will work for most filenames, but some have to
be quoted. Make sure we do so.
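The quoting in question is git's C-style quoting; a simplified sketch
(the real rules cover more control characters than shown here):

```python
def quote_path(path):
    # C-style quote a filename for fast-import's 'ls' command if it
    # contains characters that would otherwise break stream parsing.
    if any(ch in path for ch in (b'"', b'\\', b'\n')):
        path = (path.replace(b'\\', b'\\\\')
                    .replace(b'"', b'\\"')
                    .replace(b'\n', b'\\n'))
        return b'"' + path + b'"'
    return path
```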
Signed-off-by: Elijah Newren <newren@gmail.com>
While the underlying fast-export and fast-import streams explicitly
separate 'from' commit (first parent) and 'merge' commits (all other
parents), foisting that separation into the Commit object for
filter-repo forces additional places in the code to deal with that
distinction. It results in less clear code, and especially does not
make sense to push upon folks who may want to use filter-repo as a
library.
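In other words, a Commit can simply expose a single parent list
(field and function names here are illustrative):

```python
# Instead of separate from_commit and merge_commits fields, keep one list:
def combined_parents(from_commit, merge_commits):
    # first parent (if any) followed by all other parents
    return ([from_commit] if from_commit is not None else []) + merge_commits

print(combined_parents(b':1', [b':2', b':3']))  # [b':1', b':2', b':3']
print(combined_parents(None, []))               # []  -- a root commit
```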
Signed-off-by: Elijah Newren <newren@gmail.com>
Use UTF-8 chars in user names, filenames, branch names, tag names, and
file contents. Also include invalid UTF-8 in file contents, since
filter-repo should be able to handle binary data.
Signed-off-by: Elijah Newren <newren@gmail.com>
The sorting order of entries written to files in the analysis directory
didn't specify a secondary sort, thus making the order dependent on the
random-ish sorting order of dictionaries and making it inconsistent
between python versions. While the secondary order didn't matter much,
having a defined order makes it slightly easier to define a single
testcase that can work across versions.
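The fix amounts to supplying a tie-breaking key; an illustrative example:

```python
# Sort sizes in descending order, but break ties deterministically by
# name so output is stable across python versions and dict iteration
# orders:
sizes = {b'b.txt': 10, b'a.txt': 10, b'c.txt': 5}
ordered = sorted(sizes.items(), key=lambda kv: (-kv[1], kv[0]))
print(ordered)  # [(b'a.txt', 10), (b'b.txt', 10), (b'c.txt', 5)]
```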
Signed-off-by: Elijah Newren <newren@gmail.com>
Assuming filter-repo will be merged into git.git, use "git" for the
TEXTDOMAIN, and assume its build system will replace "@@LOCALEDIR@@"
appropriately.
Note that the xgettext command used to grab string translations is
nearly identical to the one for C files in git.git; just use
--language=python instead and add --join-existing to avoid overwriting
the po/git.pot file. In other words, use the command:
xgettext -o../git/po/git.pot --join-existing --force-po \
--add-comments=TRANSLATORS: \
--msgid-bugs-address="Git Mailing List <git@vger.kernel.org>" \
--from-code=UTF-8 --language=python \
--keyword=_ --keyword=N_ --keyword="Q_:1,2" \
git-filter-repo
To create or update the translation, go to git.git/po and run either of:
msginit --locale=XX
msgmerge --add-location --backup=off -U XX.po git.pot
Once you've updated the translation, within git.git just build as
normal. That's all that's needed.
Signed-off-by: Elijah Newren <newren@gmail.com>
Over a decade ago, I added code to deal with splitting and splicing
repositories where you weren't always dealing with first parents and
linear histories, and in particular where the mainline tended to be the
second parent (because there was no integrator or special central
gatekeeper like Gerrit or GitHub; instead, everyone pushed directly to
the main repository after locally testing, and integration happened via
everyone running 'git pull'). When attempting to splice repositories,
the fact that fast-export always gave changes relative to the first
parent caused some grief with my splitting and splicing efforts.
It has been over a decade, I don't know of a good testcase of this
functionality separate from the live repositories I lost access to over
six years ago, git-subtree was released in the meantime which I'm
certain handled the task better, git-fast-export since gained a
--full-tree option which might have provided a better way to attack the
problem (though with splicing repos you often want to work with additive
changes rather than recreating from scratch), and I just don't
quite understand the code anymore anyway. I think it had some
fundamental limitations that I knew my usecase avoided, but I don't
remember the details (and I'm not certain if this is true).
Even though code coverage hits all but one of the lines, I'd rather
rewrite any needed functionality if the usecase arises, and in view of
what facilities exist today rather than what I was working with a decade
ago. So, just nuke this code.
Signed-off-by: Elijah Newren <newren@gmail.com>
The original idea was to add --path-rename-(glob|regex) options, but
I like the general flexibility of --filename-callback better for
special cases and keeping the number of command line options at least
slightly in check.
Signed-off-by: Elijah Newren <newren@gmail.com>
There are several lines equivalent to BUG() calls in git that are
supposed to be unreachable, and which exist just to make debugging the
fundamental system problem or refactoring of the code slightly easier by
trying to give a more immediate notification of a problem. If these
error cases are ever hit, and the checks themselves happen to be wrong,
then the individual
will at worst get a stacktrace and the program will abort...but that
might arguably be even more helpful. Since there is no harm in avoiding
the work of finding ways to break the system to force these lines to be
covered, simply exclude them from line coverage counting.
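With coverage.py, excluding such a line takes a single comment; a
hypothetical example (function and message invented for illustration):

```python
def set_parents(commit, parents):
    if commit is None:
        # Equivalent of git's BUG(): should be unreachable; excluded from
        # line-coverage counting rather than contriving a test to hit it.
        raise SystemExit("BUG: commit unexpectedly None")  # pragma: no cover
    commit.parents = parents
```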
Signed-off-by: Elijah Newren <newren@gmail.com>