Funsize hacking


The idea of using a service which can generate partial updates for Firefox has been around for years. We actually used to have a server called Prometheus that was responsible for generating updates for nightly builds and the generation was done as a separate process from actual builds.

Scaling that solution wasn't easy, so we switched to build-time update generation. Generating updates as part of the builds helped with load distribution, but lacked flexibility: there is no easy way to generate updates after the build, because the update generation process is directly tied to the build or repack process.

Funsize will solve both of the problems listed above: it distributes load and it stays flexible.

Last year Anhad started and Mihai continued working on this project. They have done a great job and created a solution that can easily be scaled.

Funsize is split into several pieces:

  • REST API frontend powered by Flask. It's responsible for accepting partial-generation requests, forwarding them to the queue and returning generated partials.
  • Celery-based workers to generate partial updates and upload them to S3.
  • SQS or RabbitMQ to coordinate Celery workers.

One of the biggest gains of Funsize is that it uses a global cache to speed up partial generation. For example, after we build an en-US Windows build, we ask Funsize to generate a partial. Then a swarm of L10N repacks (almost a hundred of them per platform) tries to do a similar job. Every single one asks for a partial update. All L10N builds have something in common, and xul.dll is one of the biggest files. Since the files are identical there is no reason to not reuse the previously generated binary patch for that file. Repeat 100 times for multiple files. PROFIT!
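The cache win above is easy to sketch. This is not Funsize's actual code (the real cache lives in S3 and all names here are made up); it's a minimal in-memory model showing why identical (from, to) file pairs across a hundred repacks cost only one diff:

```python
import hashlib


def patch_cache_key(from_blob: bytes, to_blob: bytes) -> str:
    """Cache key for a binary diff: identical (from, to) file pairs
    across repacks map to the same key, so the patch is computed once."""
    h = hashlib.sha512()
    for blob in (from_blob, to_blob):
        h.update(hashlib.sha512(blob).digest())
    return h.hexdigest()


class PatchCache:
    """In-memory stand-in for Funsize's global cache (S3 in production)."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, from_blob, to_blob, diff_fn):
        """Return the cached patch for (from, to), computing it on a miss."""
        key = patch_cache_key(from_blob, to_blob)
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = diff_fn(from_blob, to_blob)
        return self._store[key]
```

With a hundred repacks asking for a patch of the same xul.dll, the expensive `diff_fn` runs once and the other 99 requests are cache hits.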

The first prototype of Funsize lives at github. If you are interested in hacking, read the docs on how to set up your developer environment. If you don't have an AWS account, it will use a local cache.

Note: this prototype may be redesigned to use TaskCluster. TaskCluster should simplify the initial design and reduce the dependency on always-online infrastructure.

Deploying your code from github to AWS Elastic Beanstalk using Travis

I have been playing with Funsize a lot recently. One of the goals was to iterate faster.

I have hit some challenges with both Travis and Elastic Beanstalk.

The first challenge was to run the integration (actually end-to-end) tests in the same environment. Funsize uses Docker for both hacking and production environments. Unfortunately it's not possible to create Docker images as part of a Travis job (there is an option to run jobs inside Docker, but that is a different beast).

A simple bash script works around this problem. It starts all services we need in background and runs the end-to-end tests. The end-to-end test asks Funsize to generate several partial MAR files, downloads identical files from Mozilla's FTP server and compares their content skipping the cryptographic signature (Funsize does not sign MAR files).

The next challenge was deploying the code. We use Elastic Beanstalk as a convenient way to run simple services. There is a plan to use something else for Funsize, but at the moment it's Elastic Beanstalk.

Travis has support for Elastic Beanstalk, but it's still experimental and at the time of writing this post there was no documentation on the official website. The .travis.yml file looks straightforward and worked fine. The only minor issue I hit was a long commit message.

# .travis.yml snippet
    deploy:
      - provider: elasticbeanstalk
        app: funsize # Elastic Beanstalk app name
        env: funsize-dev-rail # Elastic Beanstalk env name
        bucket_name: elasticbeanstalk-us-east-1-314336048151 # S3 bucket used by Elastic Beanstalk
        region: us-east-1
        access_key_id:
          secure: "encrypted key id"
        secret_access_key:
          secure: "encrypted key"
        on:
          repo: rail/build-funsize # Deploy only using my user repo for now
          all_branches: true
          # deploy only if particular jobs in the job matrix pass, not any
          condition: $FUNSIZE_S3_UPLOAD_BUCKET = mozilla-releng-funsize-travis

Having the credentials in a public version control system, even if they are encrypted, makes me very nervous. To minimize possible harm in case something goes wrong I created a separate user in AWS IAM. I couldn't find any decent docs on what permissions a user needs to deploy something to Elastic Beanstalk. It took a while to figure out this minimal set of permissions. Even with this minimal set the user still looks quite powerful, with (limited) access to EB, S3, EC2, Auto Scaling and CloudFormation.
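For reference, a deploy-only policy for such a user tends to look roughly like this. This is a hedged sketch, not the exact minimal set mentioned above; tighten the `Action` wildcards and `Resource` as far as your setup allows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "s3:*",
        "ec2:Describe*",
        "autoscaling:*",
        "cloudformation:*"
      ],
      "Resource": "*"
    }
  ]
}
```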

Conclusion: using Travis for Elastic Beanstalk deployments is quite stable and easy to use (after the initial setup), unless you are paranoid about encrypted credentials being available on github.

Firefox builds are way cheaper now!

Releng has been successfully reducing the Amazon bill recently. We managed to drop the bill from $115K to $75K per month in February.

To make this happen we switched to a cheaper instance type, started using spot instances for regular builds and started bidding for spot instances smarter. Introducing the jacuzzi approach reduced the load by reducing build times.

More details below.


When we first tried to switch from m3.xlarge ($0.45 per hour) to c3.xlarge ($0.30 per hour) we hit an interesting issue where Ruby wouldn't execute anything -- segmentation fault. It turned out that using paravirtual kernels on the c3 instance type is not a good idea, since this instance type "prefers" HVM virtualization, unlike the m3 instance types.

Massimo did a great job and ported our AMIs from PV to HVM.

This switch from PV to HVM went pretty smoothly, except that we had to add a swap file: linking libxul requires a lot of memory and the existing 7.5G wasn't enough.

This transition saved us more or less 1/3 of the build pool bill.
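The per-hour prices quoted above make that figure easy to check:

```python
# On-demand prices quoted above, in dollars per hour
m3_xlarge = 0.45
c3_xlarge = 0.30

# Relative saving from the instance type switch
saving = (m3_xlarge - c3_xlarge) / m3_xlarge
print(f"{saving:.0%}")  # 33% -- i.e. about 1/3 off the build pool bill
```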

Smarter spot bidding

We used to bid blindly:

- I want this many spot instances in this availability zone and this is my maximum price. Bye!

- But the current price is soooo high! ... Hello? Are you there?.. Sigh...

- Hmm, where are my spot instances?! I want twice as many spot instances in this zone and this is my maximum price. Bye!

Since we switched to a much smarter way to bid for spot instances we improved our responsiveness to the load so much that we had to slow down our instance start ramp up. :)
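Our actual bidding code is more elaborate, but its core can be sketched as a pure function (the names and the headroom heuristic here are illustrative, not the real implementation): look at the current spot price per availability zone and only bid where it sits comfortably below our maximum:

```python
def choose_bids(current_prices, max_price, headroom=0.8):
    """Pick availability zones worth bidding in: the current spot price
    must be below headroom * max_price, cheapest zones first.

    current_prices: mapping like {"us-east-1a": 0.035, ...}
    Returns a list of (zone, price) tuples sorted by price.
    """
    cutoff = max_price * headroom
    candidates = [(z, p) for z, p in current_prices.items() if p <= cutoff]
    return sorted(candidates, key=lambda zp: zp[1])
```

In practice the price data would come from the EC2 spot price history API; skipping zones that are already near the maximum bid is what avoids the blind "where are my spot instances?!" loop.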

As a part of this transition we reduced the amount of on-demand builders from 400 to 100!

Additionally, now we can use different instance types for builders and sometimes get good prices for the c3.2xlarge instance type.

Less EBS

As a part of the s/m3.xlarge/c3.xlarge/ transition we also introduced a couple of other improvements:

  • Reduced EBS storage use.
  • Started using SSD instance storage for try builds. All your try builds are on SSDs now! Using instance storage is not an easy thing, so we had to re-invent our own wheel to format/mount the storage on boot.
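That wheel is worth a sketch. Our real boot scripts handle striping multiple ephemeral devices, retries and so on; this illustrative helper (not the actual code, and the mountpoint is made up) only generates the command sequence for the single-device case:

```python
def storage_setup_commands(devices, mountpoint="/builds"):
    """Build the shell commands to format and mount instance storage
    at boot. With multiple ephemeral devices we'd stripe them first;
    this sketch handles only the single-device case."""
    if len(devices) != 1:
        raise NotImplementedError("striping multiple devices not sketched")
    dev = devices[0]
    return [
        f"mkfs.ext4 -q {dev}",
        f"mkdir -p {mountpoint}",
        f"mount -o noatime {dev} {mountpoint}",
    ]
```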

Using DNS to query AWS

DNS is hard. Twisted is hard. AWS is easy. :)

At Mozilla Releng we use EC2 a lot. DNS has always been one of the issues -- one always wants to ssh/vnc to a specific VM to debug issues. Our Puppet infrastructure requires proper forward and reverse DNS entries to generate an SSL certificate.

Before we switched to PuppetAgain we didn't bother adding VMs to DNS, and used a script to generate an /etc/hosts-style file to simplify name resolution.

After adding spot instances into the equation we had to switch to a tricky model in which we pre-create EC2 network interfaces, add the corresponding IP addresses to DNS and tag the interfaces so our AMIs can use that information to set up their hostnames, etc.

This DNS requirement makes some things very inflexible. One has to wait 10-20 minutes for DNS propagation. Even though we can use the API to add new entries, cleaning up old ones has always been tricky.

During one of my 1x1s with catlee we were brainstorming how to get rid of DNS management and still be able to reach the VMs easily, and we came to a simple idea: invent our own DNS server. Yay!

I wrote a simple DNS server using Twisted. It uses boto to query AWS and generates responses on the fly. The initial version is pretty simple: it has a lot of hard-coded values (the port, log file, etc.) and some issues with running boto asynchronously (yay defer.execute()), but it does address some of the issues above.
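The interesting part is mapping a query name to instances. Here is an illustrative sketch of the matching rules the examples below exercise (hostname wildcards, instance IDs and tag:key=value filters). The real server does this over live boto data inside Twisted; this sketch, with hypothetical instance records, works over plain dicts:

```python
from fnmatch import fnmatch


def match_instances(query, instances):
    """Resolve a query against instance records (dicts with 'id',
    'hostname', 'ip' and 'tags'):
      - 'i-...'          -> match on instance ID
      - 'tag:k=v,k2=v2'  -> all tag patterns must match
      - anything else    -> wildcard match on the hostname
    Returns the matching IP addresses."""

    def matches(inst):
        if query.startswith("i-"):
            return inst["id"] == query
        if query.startswith("tag:"):
            pairs = [p.split("=", 1) for p in query[4:].split(",")]
            return all(fnmatch(inst["tags"].get(k, ""), v) for k, v in pairs)
        return fnmatch(inst["hostname"], query)

    return [inst["ip"] for inst in instances if matches(inst)]
```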

Some useful examples (hostnames and IP addresses are omitted from the output below):

# resolve a single host
$ dig -p 1253 @localhost <some-fqdn>

# use wildcards
$ dig -p 1253 @localhost *-linux64-ec2-010.*
;; ANSWER SECTION: four A records with a 600s TTL

# use instance ID
$ dig -p 1253 @localhost i-b462f595

# use tags
$ dig -p 1253 @localhost tag:moz-loaned-to=j*,moz-type=tst*

# do something useful: ping all loaned slaves
$ fping `dig -p 1253 @localhost tag:moz-loaned-to=* +short`
(four hosts report alive, two unreachable)

EC2 spot instances experiments

To optimize Mozilla's AWS bills I recently started playing with EC2 spot instances. They can be much cheaper than regular on-demand or reserved instances, but they can be killed "by price" anytime if somebody bids higher than you. The idea to use cheaper instances has been around for a while, but the fact that a job can be interrupted was one of the psychological stoppers for us.

We decided to try the spot instances as unit test slaves, in order to simplify the changes required to plug them into our infra. The spot request API is a bit different from the on-demand API: there is no way to re-use existing disk volumes, only snapshots (read: AMIs). We use Puppet to set up everything on the system, as long as proper DNS and SSL certificates are in place. With minimal changes I managed to get AMIs to bootstrap themselves in a couple of minutes after boot. Under the hood we pre-create EC2 network interfaces and entries in DNS, so that Puppet and buildbot work just like on the rest of our infra.

Another challenge was to avoid losing interrupted jobs. It turned out that bhearsum had recently deployed a buildbot change to retry interrupted jobs, and during the experiment all interrupted jobs were retried. Avoiding spot instances for the second run of an interrupted job is still to be done in bug 925285.

Some facts:

  • 50 m1.medium instances were used in the experiment.
  • Bid prices varied from 2.5¢ to 8¢.
  • 22 instances were killed "by price" within the first 20 minutes.
  • 20 instances survived 48 hours (until I killed them). Not only the most expensive ones: even the cheapest ones were among the survivors.
  • ~2000 test jobs were run.
  • An unknown number of thoughts were exchanged with catlee and tglek. :)

To be continued... Stay tuned!

Firefox Unit Tests on Ubuntu

It's been a while since we in RelEng started thinking about offloading builds and tests to AWS VMs. Last year we started building Firefox (Linux and Android), Thunderbird and Firefox OS on CentOS 6.2-based AWS VMs. Since then our wait times have always been above 95%, usually around 99%.

However, the story of the tests' wait times is different. Since RelEng started building faster, added new products (especially Firefox OS) and more branches, the wait times went below 50%.

It took more than a month to get the new platform up and running, properly puppetized and documented. I really liked using mind maps to organize chaotic thoughts, git-buildpackage to keep the package building process under control, and Upstart for its ability to chain services on system boot.

Chris posted a great overview of what we have now.

I would also like to say THANKS to Armen and Joel for their help with getting tests running on the new platforms, Callek and Dustin for their patience reviewing HUGE patches to get the platform puppetized.

Switching to Nikola

I've been using Wordpress as a blog engine for a while now, but I wasn't happy with it for some reasons:

  • Security. I've never been hit by this issue myself, but some of my friends and colleagues have had a bad experience recovering their blogs after successful attacks.
  • It's not easy to back it up sanely. Since Wordpress is a database-driven blog engine you have to dump the database, copy the files, etc. Using version control is almost impossible in this scenario.
  • No way to use it offline. That's exactly when you want to write something! Of course you can write things down in your favourite text editor, then transfer the text and fix its appearance, but you can't see and use the whole blog.

Since I've been running Wordpress on my own server, I won't complain about other concerns people usually have: PHP, PHP versions, PHP modules, database, file permissions, running not under www-data user, etc.

I've been looking for something that eliminates most of the problems listed above (the security issue can never be fully eliminated!). Since Python is the most used language at Mozilla RelEng, I decided to pick one of the static blog engines/generators listed at Python blog software. Nikola was one of the engines I had been seeing in the news recently. Also, having a blog engine named after your grandfather isn't a bad idea. :)

So far I have managed to import my old posts from Wordpress. However, I'm going to re-import the old posts manually, just to be sure that I have all my posts in the same format. BTW, Nikola supports reStructuredText and Markdown, which is really great.

I still need to figure out the easiest way to put the blog under version control, and teach VIM and myself to use Nikola properly.

P.S. I can highlight :)


import sys

def hello(name="world"):
    print("hello", name)

if __name__ == "__main__":
    # pass an optional name on the command line
    hello(*sys.argv[1:])
How to use Visual Studio 2010 to build Firefox using Try

It took a while to start working on Bug 563317 (Install Visual C++ 2010 on build slaves) and get it working properly.

The first challenge was the OPSI installation procedure of Visual Studio 2010 which requires 3 reboots (!) to get installed properly. The final OPSI installation instructions don't seem too horrible.

The second challenge was awaiting me after I deployed the package on the try build slaves. Our start-buildbot.bat batch file was setting Visual Studio 2005 environment variables, and it was not easy to reset those variables. After a bunch of try pushes the solution was pushed!

So, if you want to compile Firefox with Visual Studio 2010 using try server, add the following line to the end of your mozconfig:

. $topsrcdir/browser/config/mozconfigs/win32/vs2010-mozconfig

P.S. To have talos tests for debug builds running properly we still need to fix Bug 701700 and deploy VC++ 2010 debug CRT on talos slaves.

Harvesting releases

This month was a very interesting one.

I had a chance to be involved in 6 (!) release processes: 3.7a1, 4.0b1 (2 builds), 3.6.6, 3.6.7 and 3.5.11. All of these builds were unique (at least for me).


3.7a1

Last alpha with a different name (MozillaDeveloperPreview). We introduced the linux64 and macosx64 platforms in this release. Lucky me, the build environment for these platforms had been carefully prepared and tested by Armen and Bear beforehand. During the preparation for this release RelEng resolved some annoying bugs, which reduced manual intervention in the release process.


4.0b1

Not released yet. First branded version of Firefox 4, built for 5 platforms. Due to some discovered bugs we had to wait a day or two and produce build2.


3.6.6

Stable release with some fixes. Nothing unusual, except the previous product version being 3.6.4 (not 3.6.5) and some fun with forcing L10N repacks. Despite the fact that the time when we started the build wasn't ideal (Friday night, my Saturday morning), we released it in less than 24 hours.

It is the fastest release in RelEng history. It's a pleasure being a part of history. :)


3.6.7

Not released yet. Available to beta users. We had to run this release in parallel with 3.5.11. It needed some sed magic for the snippets (thanks to Nick Thomas) to reduce server load and use the mirrors for beta channel updates. A lot of fun producing Major Updates (MUs) for Firefox 3.0.19 manually.


3.5.11

Not released yet; an old stable version, available to beta users. The build was done in parallel with 3.6.7. As a part of this build we also produced MUs for 3.5.x -> 3.6.7. The MUs were done by release automation.

As a result, I now have a much clearer understanding of the release process, the release workflow and the release infrastructure.

Special thanks go to Ben Hearsum, Chris AtLee and Nick Thomas for being great supervisors!