Gitian
Read about the project goals at the project home page.
This package can do a deterministic build of a package inside a VM.
Deterministic build inside a VM
This performs a build inside a VM, with deterministic inputs and outputs. If the build script takes care of all sources of non-determinism (mostly caused by timestamps), the result will always be the same. This allows multiple independent verifiers to sign a binary with the assurance that it really came from the source they reviewed.
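As an illustration of the idea (not the exact mechanism the build scripts use), pinning every file's mtime to an agreed-upon date and archiving in a stable order lets two independent builds produce byte-identical tarballs; the date, paths, and names below are hypothetical:
# Illustration only: pin all mtimes to a fixed reference date
export REFERENCE_DATETIME="2013-06-01 00:00:00"
find out/ -exec touch --date="$REFERENCE_DATETIME" {} +
# archive in sorted order, and tell gzip not to embed a timestamp
find out/ -type f | sort | tar --no-recursion -cf - -T - | gzip -n > out.tar.gz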
Prerequisites:
Gentoo:
layman -a luke-jr # needed for vmbuilder
sudo emerge dev-vcs/git net-misc/apt-cacher-ng app-emulation/vmbuilder dev-lang/ruby
sudo emerge app-emulation/qemu
export KVM=qemu-system-x86_64
Ubuntu:
This pulls in all prerequisites for KVM building on Ubuntu:
sudo apt-get install git apache2 apt-cacher-ng python-vm-builder ruby qemu-utils
If you'd like to use LXC mode instead, install it as follows:
sudo apt-get install lxc
Debian:
See Ubuntu, and also run the following on Debian Jessie or newer:
sudo apt-get install ubuntu-archive-keyring
On Debian Wheezy you run the same command, but you must first add wheezy-backports to your sources, because the package is only available there.
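A sketch of enabling wheezy-backports and installing from it (the mirror URL is only an example; substitute your preferred mirror):
echo "deb http://http.debian.net/debian wheezy-backports main" | sudo tee /etc/apt/sources.list.d/wheezy-backports.list
sudo apt-get update
sudo apt-get -t wheezy-backports install ubuntu-archive-keyring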
OSX with MacPorts:
sudo port install ruby coreutils
export PATH=$PATH:/opt/local/libexec/gnubin # Needed for sha256sum
VirtualBox:
Install VirtualBox from http://www.virtualbox.org, and make sure VBoxManage is in your $PATH.
Create the base VM for use in further builds
NOTE: requires sudo; please review the script.
KVM
bin/make-base-vm
bin/make-base-vm --arch i386
LXC
bin/make-base-vm --lxc
bin/make-base-vm --lxc --arch i386
Set the USE_LXC environment variable to use LXC instead of KVM:
export USE_LXC=1
VirtualBox
The command-line VBoxManage tool must be in your $PATH.
Setup:
make-base-vm cannot yet make VirtualBox virtual machines (patches welcome; it should be possible to use VBoxManage, boot-from-network Linux images, and PXE booting to do it). So you must either obtain or manually create VirtualBox machines that:
- Are named Gitian-<suite>-<arch>, e.g. Gitian-lucid-i386 for a 32-bit Ubuntu 10 machine.
- Have a booted-up snapshot named Gitian-Clean. The build script resets the VM to that snapshot to get reproducible builds.
- Have the VM's NAT networking set up to forward port localhost:2223 on the host machine to port 22 of the VM; e.g.:
VBoxManage modifyvm Gitian-lucid-i386 --natpf1 "guestssh,tcp,,2223,,22"
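As a rough sketch of the manual setup, a matching machine and snapshot could be created with VBoxManage along these lines, assuming you already have an installed lucid i386 disk image (lucid.vdi here is hypothetical, as are the memory size and controller name):
VBoxManage createvm --name Gitian-lucid-i386 --ostype Ubuntu --register
VBoxManage storagectl Gitian-lucid-i386 --name SATA --add sata
VBoxManage storageattach Gitian-lucid-i386 --storagectl SATA --port 0 --device 0 --type hdd --medium lucid.vdi
VBoxManage modifyvm Gitian-lucid-i386 --memory 1024 --nic1 nat --natpf1 "guestssh,tcp,,2223,,22"
VBoxManage startvm Gitian-lucid-i386
# once the machine has fully booted, take the snapshot the build script resets to
VBoxManage snapshot Gitian-lucid-i386 take Gitian-Clean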
The final setup step is to create an ssh key that will be used to log in to the virtual machine:
ssh-keygen -t dsa -f var/id_dsa -N ""
ssh -p 2223 ubuntu@localhost 'mkdir -p .ssh && chmod 700 .ssh && cat >> .ssh/authorized_keys' < var/id_dsa.pub
Then log into the VM and copy the ssh key to root's authorized_keys file.
ssh -p 2223 ubuntu@localhost
# Now in the vm
sudo bash
mkdir -p .ssh && chmod 700 .ssh && cat ~ubuntu/.ssh/authorized_keys >> .ssh/authorized_keys
Set the USE_VBOX environment variable to use VirtualBox instead of KVM:
export USE_VBOX=1
Sanity-testing
If you have everything set up properly, you should be able to:
PATH=$PATH:$(pwd)/libexec
make-clean-vm --suite lucid --arch i386
# For LXC:
LXC_ARCH=i386 LXC_SUITE=lucid on-target ls -la
# For KVM:
start-target 32 lucid-i386 &
# wait a few seconds for VM to start
on-target ls -la
stop-target
Building
Copy any additional build inputs into a directory named inputs.
Then execute the build using a YAML description file (can be run as non-root):
export USE_LXC=1 # LXC only
bin/gbuild <package>.yml
or if you need to specify a commit for one of the git remotes:
bin/gbuild --commit <dir>=<hash> <package>.yml
The resulting report will appear in result/<package>-res.yml.
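The <package>.yml descriptor names the suites, architectures, packages, and sources used for the build. A minimal sketch of what one might look like, based on commonly seen descriptors (every value below is hypothetical):
# hypothetical minimal descriptor; all values are illustrative
cat > hello.yml <<'EOF'
---
name: "hello"
suites:
- "lucid"
architectures:
- "i386"
packages:
- "build-essential"
reference_datetime: "2013-06-01 00:00:00"
remotes:
- "url": "https://github.com/example/hello.git"
  "dir": "hello"
files: []
script: |
  cd hello
  make
  cp hello $OUTDIR
EOF
bin/gbuild hello.yml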
To sign the result, run:
bin/gsign --signer <signer> --release <release-name> <package>.yml
Where <signer> is your signing PGP key ID and <release-name> is the name of the current release. This will put the result and signature in the sigs/<package>/<release-name> directory. The sigs/<package> directory can be managed through git to coordinate multiple signers.
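A plausible coordination flow, assuming the signers share a git repository for the sigs tree (the remote URL is hypothetical):
cd sigs/<package>
git init                     # first signer only
git remote add origin https://github.com/example/<package>-sigs.git
git add <release-name>/
git commit -m "add my signature for <release-name>"
git push origin master
# other signers commit their own <release-name>/<signer> directories,
# then everyone pulls and merges before verifying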
After you've merged everybody's signatures, verify them:
bin/gverify --release <release-name> <package>.yml
Poking around
- Log files are captured to the var directory.
- You can run the utilities in libexec by putting it on your PATH: PATH="libexec:$PATH"
- To start the target VM, run start-target 32 lucid-i386 or start-target 64 lucid-amd64
- To ssh into the target, run on-target or on-target -u root
- On the target, the build directory contains the code as it is compiled, and the install directory contains intermediate libraries
- By convention, the script in <package>.yml starts with any environment setup you would need to manually compile things on the target (see the sketch below)
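For example, a plausible way to poke at a failed build by hand:
on-target                    # ssh into the still-running target
cd build && ls               # the tree being compiled
# paste the environment-setup lines from the top of <package>.yml here,
# then re-run the failing configure/make step manually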
TODO:
- disable sudo in target, just in case of a hypervisor exploit
- tar and other archive timestamp setter
LXC tips
bin/gbuild runs lxc-execute or lxc-start, which may require root. If you are in the admin group, you can add the following sudoers lines to avoid being asked for your password every time:
%admin ALL=NOPASSWD: /usr/bin/lxc-execute
%admin ALL=NOPASSWD: /usr/bin/lxc-start
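One way to install those rules is a drop-in file under /etc/sudoers.d (the filename gitian-lxc is arbitrary; sudo visudo -f /etc/sudoers.d/gitian-lxc works too):
printf '%s\n' \
  '%admin ALL=NOPASSWD: /usr/bin/lxc-execute' \
  '%admin ALL=NOPASSWD: /usr/bin/lxc-start' | sudo tee /etc/sudoers.d/gitian-lxc
sudo chmod 440 /etc/sudoers.d/gitian-lxc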
Right now lxc-start is the default, but you can force lxc-execute (useful for Ubuntu 14.04) with:
export LXC_EXECUTE=lxc-execute
Recent distributions allow lxc-execute / lxc-start to be run by unprivileged users, so you might be able to rip out the sudo calls in libexec/*.
If you have a runaway lxc-start command, just use kill -9 on it.
The machine configuration requires access to br0 and assumes that the host address is 10.0.2.2:
sudo brctl addbr br0
sudo ifconfig br0 10.0.2.2/24 up
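Those settings do not survive a reboot. On Debian/Ubuntu-style systems using ifupdown with bridge-utils, a persistent equivalent might look like this (a sketch; adjust to your distribution's network configuration):
sudo tee -a /etc/network/interfaces <<'EOF'
auto br0
iface br0 inet static
    address 10.0.2.2
    netmask 255.255.255.0
    bridge_ports none
EOF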
Tests
The test suite is currently not very extensive. Run it with:
python -m unittest discover test