A gem for your unique constraints

A few weeks ago, we wrote about how much of a pain it is to handle unique constraints correctly in Rails and showed some code to deal with it.

Good news! We just released a tiny gem, rescue_unique_constraint, that adds this capability to your models. With one line of code, you can ask your model to rescue unique constraint failures and turn them into regular model errors that can be safely rendered in your views.

You can download the gem from RubyGems (gem install rescue_unique_constraint) and GitHub.

Here’s a quick example of what it looks like:

class Thing < ActiveRecord::Base
  rescue_unique_constraint index: "my_unique_index", field: "somefield"
end

thing = Thing.create(somefield: "foo")
dupe = Thing.create(somefield: "foo")
dupe.persisted?
# => false
dupe.errors[:somefield]
# => ["somefield has already been taken"]

- @skwp

Database unique constraints in Rails

TLDR

ActiveRecord uniqueness validations are not good enough for distributed systems. Instead, database-level unique constraints must be used. When they are, custom logic must be implemented on the Rails side to trap these errors and report them as standard AR errors rather than exceptions.

How race conditions happen

Note that any system running more than one thread or process (even two Unicorn workers) is susceptible to this.
Here’s how a race condition occurs:

1. Thread 1 checks for the presence of a record; none exists.
2. Thread 2 checks for the presence of a record; none exists.
3. Thread 1 and Thread 2 both write to the database.
4. The database now contains duplicate rows, which can’t be re-saved because they will now fail Rails’ uniqueness validation.

How to fix this

First, add a unique index. Keep in mind that this index should be added
concurrently to avoid locking up the table if you’re in a high volume system.

As part of your migration, provide code to de-duplicate the existing entries, or the index creation will fail but leave the index partially created, forcing you to drop it prior to re-creation.

Indexes can contain WHERE clauses to reduce their scope. Below is an example where the index ignores rows that have been deleted.

Example:
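A minimal sketch of such a migration (assuming Postgres, a hypothetical somethings table, and soft deletion via a deleted_at column):

class AddUniqueIndexToSomethings < ActiveRecord::Migration
  # CREATE INDEX CONCURRENTLY cannot run inside a transaction
  disable_ddl_transaction!

  def up
    # De-duplicate existing rows here first, or the concurrent index
    # build will fail and leave an invalid index behind.

    add_index :somethings, [:user_id, :somefield],
              unique: true,
              where: "deleted_at IS NULL",
              algorithm: :concurrently,
              name: "index_somethings_on_user_id_and_somefield"
  end

  def down
    remove_index :somethings, name: "index_somethings_on_user_id_and_somefield"
  end
end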

Second, add handling to the ActiveRecord model to capture the database-level constraint failure. The best way we have so far is to override the `create_or_update` method, which is called by ActiveRecord during `save` and `save!`, like this:
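A minimal sketch of that override (the index and field names are the hypothetical ones from the migration above; the rescue_unique_constraint gem from the previous post packages the same idea):

class Something < ActiveRecord::Base
  private

  # create_or_update is called by ActiveRecord from both save and save!
  def create_or_update(*args)
    super
  rescue ActiveRecord::RecordNotUnique => e
    case e.message
    when /index_somethings_on_user_id_and_somefield/
      errors.add(:somefield, :taken)
    else
      raise
    end
    false
  end
end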

If your table has multiple unique constraints, you can add a clause to the case statement for each index.

Note the adding of the standard `taken` error to the appropriate field. This is the error Rails would normally add for a uniqueness failure. You can adjust/override the message in a translations file like this:
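For example, in a locale file (a sketch using the standard Rails lookup key; adjust the wording to taste):

# config/locales/en.yml
en:
  activerecord:
    errors:
      messages:
        taken: "is already in use"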

Caveats

If you have code that rescues ActiveRecord::RecordInvalid, you should realize that it’s possible for the record to be valid from the Rails standpoint but still fail database level constraints. When that happens, you will get back an ActiveRecord::RecordNotUnique or RecordNotSaved if you implemented the create_or_update override suggested above.

These are all subclasses of ActiveRecord::ActiveRecordError, which is what you should rescue if, for some reason, you are calling `save!` and want to handle the failure yourself.

Till next time,
@skwp

Introducing migr8, a Concurrent Redis Migration Utility Written in Go

Here at Reverb, we’ve got quite a few places that we like to store our data. One of those places is Redis. We use Redis in quite a few ways including our job queues for Sidekiq and our analytics tracking for our internal service called Bump.

As a scrappy startup we thought to ourselves “oh one redis instance should be just fine forever and ever”…until it wasn’t. Earlier this year we started looking at our rate of growth in our redis keyspace and noticed we were quickly running out of memory. We knew something had to be done.

We came up with a plan: split Bump out into its own Redis instance. With this plan in mind, we started looking to see if anybody else had solved this problem before us. We stumbled upon this script, which was the initial inspiration for our tool, Migr8. One of the first problems we noted about this script is its use of “keys *”.

Running keys * is a pretty bad idea if you’ve got a decent-sized data set in Redis. The command is fine to run in development or staging, but please heed our warning (we’ve made the mistake): do not run “keys *” in production. Your Redis instance will lock up while trying to process the command and will likely fail in the process. Redis locks on “keys *” because the command is O(n) with the number of keys, so if you have a significant number of keys, lock city awaits your arrival.

Luckily we ran the command on a slave so we just had to resync the slave with the master. You’ve been warned :)
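The safe alternative is Redis’s SCAN family of commands, which walk the keyspace incrementally in small batches instead of all at once. A minimal Ruby sketch using the redis gem’s scan_each (the URL, key pattern, and batch size are just illustrative):

require 'redis'

redis = Redis.new(url: "redis://old-redis.internal:6379")

# scan_each iterates the keyspace a batch at a time, so the server
# keeps answering other commands while we enumerate keys.
redis.scan_each(match: "bump:*", count: 1000) do |key|
  # inspect or migrate the key here
  puts key
end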

So after a lot of toying around with different implementations in Ruby, we decided to give it a shot writing a tool in Golang. Our initial Ruby implementations were processing keys at a rate of 100 keys per second; Ruby’s GIL (Global Interpreter Lock) makes it a poor choice for fast, concurrent code like this. Go, on the other hand, has concurrency built into the language, and the Go implementation made our network card the bottleneck at 20k keys per second. Yeah, wow. Go is pretty fast.

At the time, we had to move around 40 million keys from our main Redis instance to the new Bump instance. If we had stuck with the Ruby implementation, it would have taken us around 100 hours to migrate the keys from one instance to the other. That’s way too long.

Using the migr8 utility, we were able to complete the migration in about 30 minutes. Now that’s a much more acceptable number in terms of downtime.

Here are some quick examples of how to use the Migr8 utility:

Using Go for this tool was a huge win over Ruby. So now we’d like to share the tool with you in hopes that it helps you move some Redis.

Til next time,

@atom_enger

@erikbenoist

@kylecrum

Github link to migr8


Communicating via Code

As we’ve been growing at Reverb, being able to communicate effectively between teams and developers has been crucial to our ability to scale and create great software that our customers love. And as an organization that likes to stay small and agile, one of the best ways we can communicate with each other is through the code we write.

I recently got a chance to synthesize some of the ideas about communicating with code that we use here at Reverb during the Windy City Rails Conference.

Enjoy the talk, and any feedback is welcome.

Kyle (@kylecrum)

From Handcrafted to the Assembly Line: Terraforming Reverb.com

At Reverb, we’re always thinking about ways to improve our workflow. Whether it’s in our application, our customer experience or our infrastructure, we know there’s always room for improvement.

One of the areas where we still have a lot of black boxes and not-so-obvious knowledge is the infrastructure that powers Reverb.com. When I started at Reverb in November 2014, all of our servers were built and maintained by hand. I knew that this approach was not feasible if we wanted to continue scaling our platform.

My first pass at revamping the infra included writing a Chef cookbook for every service and using Chef to manage the infrastructure. This offered us a lot of benefits, such as repeatability and the ability to document the actions required to configure our servers.

While we made some strides on the operating system and application level, we were still building what I like to call ‘artisanally crafted infrastructure’. Networks, subnets and load balancers were all set up by hand.

As the year went on, a lot of questions arose from the team: “Why does this server have X amount of RAM? Why is X service in this subnet? Why does the load balancer listen on this port?” I knew that if I wanted to scale this platform, I had to shift the way I approached our infrastructure. I knew I had to document our infrastructure entirely in code.

Enter Terraform.

Terraform has given us the ability to create and spin up new environments in just minutes. Not only do the Terraform files document the infrastructure themselves, it’s also incredibly useful to be able to tear down and spin up an entire environment by running one command: `terraform apply`.

Here’s an example of a Terraform plan that we use to set up one of our staging environments.

So far, we’ve rebuilt every one of our staging environments using a similar Terraform plan. This plan brings up our load balancer, database, elasticache instance and the instance that will run the reverb code. It even configures the DNS record pointing to the CNAME of the load balancer.

After the instance has been provisioned, Terraform even does us another solid: bootstraps the instance with the Chef server.

Terraform allows you to dynamically reference resources as they’re created. You’ll notice that in the load balancer resource, I’m referencing ${aws_instance.example.*.id}, which is string interpolation. Basically I’m telling the load balancer, “I don’t care how many instances there are, just use them all!”.

Another great feature of Terraform is that it allows you to generate dependency graphs so you can easily describe your infrastructure to others in a visual format:

Lastly, one of the things I’m really loving about this approach is that creating a new environment to test some crazy change is as easy as typing:

cp -r old-env/ new-env/

in Vim: %s/old-env-name/new-env-name/g

and finally: terraform apply

Next time you find yourself logging into AWS to make a handcrafted server sandwich with an applewood smoked load balancer, ask yourself “Is this something I could document and share with my team using a Terraform plan?”.

More than likely the answer will be yes. Not only are you spreading the knowledge that it took to create that piece of the Rube Goldberg machine, you’re also saving yourself hours of pain later on figuring out how you set up the damn thing months ago.

Like what we’re doing here and want to contribute to the best place to buy music gear on the web? We’re hiring for a Jr DevOps Engineer and more!

Til next time,

@atom_enger

Stay safe while using html_safe in Rails

Whether you’re a junior dev, product designer or senior level software engineer, it’s easy to fall on your face when using `html_safe` in Rails.

The thing about this method is: it’s terribly named. I mean really, it’s a horrible name. When you call a method on an object which transforms the original object, the method name should describe the transformation which is about to happen.

The html_safe method makes you think that the transformation you’re doing to the string is actually going to be safe. It can be safe. It can be very unsafe, too.

I’m going to go on record stating that we should call this method something more sane, like html_beware. Why beware? Because as a code committer, you should be very aware of the string that you’re calling this method on. If the string contains user-controlled input of any kind, you should certainly not call html_safe on it. This method should make you think twice about what you’re doing, and calling it “safe” doesn’t make you think at all.

Let’s go over some code examples and explain exactly how html_safe works, and why it’s unsafe in certain contexts.
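Here’s a rough sketch of the behavior (assuming a Rails view context; the variable names are made up):

# html_safe doesn't sanitize anything -- it just marks the string as
# "trusted", telling Rails not to escape it when rendering.
trusted = "<strong>Bold, and written by us</strong>".html_safe
# <%= trusted %> renders a real <strong> tag. Fine: we wrote this markup.

# The danger is marking strings that contain user input as trusted:
user_input = params[:name]  # e.g. "<script>alert('xss')</script>"
greeting = "Hello #{user_input}".html_safe
# <%= greeting %> now injects the user's <script> tag into the page.

# Safer: escape the user-controlled part before marking the result safe,
greeting = "Hello #{ERB::Util.html_escape(user_input)}".html_safe
# or strip disallowed tags entirely with the sanitize view helper:
greeting = sanitize("Hello #{user_input}")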

Now that we’ve looked at how to use html_safe properly, let’s look at an example of how we at Reverb fell on our face. Not too long ago we shipped some code which allowed user-controlled input to be inserted into the DOM. This resulted in a stored XSS attack, which you can see here:


Here’s the bad code:

And here’s how we fixed it:

While there’s nothing inherently harmful about a JavaScript alert besides a minor annoyance, this attack vector illustrates that a user can inject any type of HTML tag into the DOM, including script tags. This could be especially disastrous if this vector were used to steal session cookies or login information. Thankfully we caught this error ourselves and it was not exploited.

Keep this in mind while you’re building your next awesome project, and make sure you know exactly where the string you’re calling html_safe on comes from. And even if you’re not building something new and have inherited an older codebase, consider grepping your codebase for string interpolations combined with html_safe:

.*(\+|\}).*html_safe

Nothing is perfect, including this method’s name, but the lesson is that it pays to be careful about what type of user data you’re working with. Here at Reverb, we believe in owning mistakes and fully understanding why they happened.

That being said, we also believe that nothing is perfect and mistakes will happen. If you believe you’ve found a bug on our platform, please securely and responsibly disclose it to us at security@reverb.com. We will work with you to confirm, close, and patch the hole. We do offer a bounty for critical bugs and swag for bugs with a lower risk profile.

Until next time, stay html_safe!

@atom_enger

@joekur

Rails and Ember Side by Side

This is not a blog post about embedding Ember CLI in your Rails app. Instead, it’s a post about how to get the two to live in harmony next to each other by separately deploying Rails and Ember, but making them feel like one app.

Our first attempt

Last week we launched our first foray into Ember – an admin-facing utility that helps us organize, curate, and police content on our site. Our admin area is developed primarily in Rails, but we wanted one page to be the Ember app.

Our first instinct was to look at ways to integrate our Ember app directly into the Rails admin so it could live “inside” the page. We tried ember-cli-rails, a project that promised a lot of magic.

With a few lines of configuration, we could get Rails to compile our ember app and ship it along with our asset pipeline. Great! Ship it! But…disaster struck.

Problems with ember-cli-rails

1. It forces an ember dependency on all our Rails developers. They now need to know about npm, bower, and more in order to get their Rails app to even boot. This is sadness.

2. It bloats our Rails codebase by introducing another big hunk of code into it (an entire ember app).

3. The worst part: it turned our relatively snappy 2 minute Jenkins deploy into an 8 minute deploy (!). The issue appeared to be in the asset pipeline. Something was causing a drastic slowdown in compilation, right around the time of dealing with Ember’s vendor assets (things like ember-data). Whether this is a bug in ember-cli-rails or simply the asset pipeline being the slow beast that it is remains to be seen.

We could probably get over #1 and #2 after some initial pain, but a four-fold increase in deploy times was an unacceptable tradeoff for having Ember be part of our Rails app.

Solutions?

When we ran ember-cli’s preferred compilation method (ember build), the build time was just fine. In fact on the same Jenkins box that took 4 minutes to concatenate assets in the Rails asset pipeline, the ember build took less than 20 seconds!

So we decided we were going to separate the two apps. But we still wanted it to feel like one app. Let’s get to work.

1. The Ember app should share a session with the Rails app

Because we didn’t want to deal with fancy things like OAuth or token-based authentication against our API, we decided to serve the Ember app on the same domain – https://reverb.com/app_goes_here. Running on the same domain means it shares session cookies, so the Rails app sees it as “logged in”.

So the first thing we need to do is get it into a public directory on our existing web servers. We’ll talk about this in the deploy section below.

2. The Ember app should be environment aware so it can point to different backends

When you build your ember app, you can pass in an environment with “ember build --environment production”. To make our app aware of different endpoints, we added this into its config/environment.js:

https://gist.github.com/skwp/0bc41973a8952652f47d.js

3. What about CSRF?

Rails comes with some CSRF protection out of the box. The way it normally works is that Rails returns your CSRF token as a tag in the body of the HTML you request, and you then submit forms back to Rails with that CSRF token. Ember does not pull HTML from Rails, and all of its requests are asynchronous. How to fix?

1. Make Rails return the CSRF token in a cookie for Ember to read

https://gist.github.com/skwp/130d6b18ee90c1c93799.js

2. Make Ember pull that cookie and set it on every outgoing request

https://gist.github.com/skwp/f0d09dc9adac07e597bf.js
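For step 1, the Rails side can be as small as an after_action that exposes the token in a cookie (a sketch, assuming a Rails 4-style ApplicationController):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  protect_from_forgery with: :exception

  after_action :set_csrf_cookie

  private

  # Expose the CSRF token in a cookie so the Ember app can read it
  # and send it back with each AJAX request.
  def set_csrf_cookie
    cookies['XSRF-TOKEN'] = form_authenticity_token if protect_against_forgery?
  end
end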

Done.

4. How to deploy?

Ok, now the fun part. What is an Ember app at its core? It’s just static HTML and JavaScript. We know how to deploy that; we just put it in the public dir of our Rails app, right? Ok, so all we need to do is:

1. compile the ember app (npm/bower/ember build)
2. upload it to an s3 bucket
3. tell all our servers to download it

This is not particularly polished, but you get the idea:
https://gist.github.com/skwp/92c569cb622c47d3a1b5.js
Done.

5. Bonus: make it feel like part of the app

I’ll describe this one instead of giving you code. We wanted the ember app to have the same “layout” as the rest of our admin interface. Ajax to the rescue: just make a controller to render a partial and have ember pull it using jQuery.load into a div of your choice. Style it similar to your Rails app, and the illusion is complete.

One thing to note is that the Ember app is currently fully self contained in terms of assets. So in order to mimic the look and feel of our admin (which was based on Bootstrap), we had to pull Bootstrap into the Ember project. In the future, we may want to pull assets from Rails to avoid duplicating CSS. We have some ideas on how to do this using a controller to serve up the asset paths via an API but we’ll blog about that once we have a working prototype.

Yan Pritzker – @skwp

Organizing your Grape API endpoints

The following is taken from a Reverb Architecture Decision Document

TLDR

Grape endpoints (classes inheriting from Grape::API) are basically equivalent to Rails controllers. As such, they can contain many unrelated methods (index/show/delete/create). As they grow, the code becomes harder to maintain because helper methods usually only apply to one of the endpoints, similar to Rails controller private methods.

Decision

Grape endpoints should be delivered as independent classes for each action. For example, instead of:

# app/api/reverb/api/my_resource.rb
class MyResource < Grape::API
  get '/something' do
  end

  post '/something' do
  end
end

Create separate classes (and files) for each verb:


# app/api/reverb/api/my_resource/index.rb
module MyResource
  class Index < Grape::API
    get '/something' do
    end
  end
end

# app/api/reverb/api/my_resource/create.rb
module MyResource
  class Create < Grape::API
    post '/something' do
    end
  end
end

This allows us to define helper methods in each endpoint specific to that endpoint. Additionally, prefer creating model classes to one-off helper methods for endpoints when appropriate.
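As an illustration, a helper defined inside one endpoint class is visible only to that endpoint (the sort_order helper below is hypothetical):

# app/api/reverb/api/my_resource/index.rb
module MyResource
  class Index < Grape::API
    format :json

    helpers do
      # Only Index needs this; Create never sees it.
      def sort_order
        params[:sort] == "oldest" ? :asc : :desc
      end
    end

    get '/something' do
      { sort: sort_order }
    end
  end
end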

Positive Programming with Junior Devs

Hello, World. I’m Tam, and I am writing to you fresh from my third week on the engineering team at Reverb. I also just crossed into my second year as a professional programmer. Milestones! Growth! Vim!

I think of myself as an experienced novice. Thanks to my origins in a programming bootcamp, I know a lot of other people in my boat. It’s becoming more of a ship, actually — a sizable fleet, and we are crash-landing at your company in numbers never-before-seen! Prepare thyself accordingly:

Kindness
The first few days I showed up, different team members took me out to lunch. They all already knew my name. This made me feel welcome, which goes a long way in those strange first days.

Transparency
Within my first week, I received a document: “Expectations of Junior Developers.” This inspired my trust and confidence: they have invested time and thought into how they can smoothly onboard me. It also gave me a roadmap to judge my own progress. Building self-sufficiency feels good; provide people with tools that they may do so.

Patience
We share vim configurations here, and one of our key mappings is [,][t]. It maps to fuzzy file searching. Now, I have been typing since I was 10. I can type really quickly! But every comma I’ve ever typed has been followed by a whitespace.  Do you have any idea how many times I screwed up typing comma-t while my pair waited? We likely spent an entire collective day waiting on my fumbling fingers. I couldn’t even remember the keystrokes at first. Herein lies an opportunity for immense frustration on all sides. I urge you, experienced team member, to have patience. You are in a leadership position. If you get too frustrated too quickly, your junior stands no chance. Be patient: they are trying really hard, and it is exhausting.

We can teach you things
This week I unintentionally taught our CTO that you can split a git hunk. That was really exciting! There is a lot to know about software development. If you stay receptive, we may be able to teach you something in return.

The bottom line is, you have to be excited that we’re here. Every junior I know is thrilled, nervous, and doing everything they can to stay afloat. If you’ve screened them, you know they have potential. Try not to get in the way!

To the juniors of the world, don’t be afraid. You can do this. Find a supportive environment, keep friends close, and … Go!

@tamatojuice

Making inheritance less evil

Sometimes you come up against a problem that just seems to want to be solved with inheritance. In a lot of cases, you can get away from that approach by flipping the problem upside down and injecting dependencies. Sandi Metz’s RailsConf talk “Nothing is Something” does a great job of covering this concept in a really fun way.

But if you have decided that inheritance is truly the right approach, here is something you can do to make your life just a little easier. It’s called DelegateClass.

Let’s quickly summarize a few reasons why inheritance is evil, especially in Ruby:
1. You inherit the entire API of your superclass including any future additions. As the superclass grows, so do the subclasses, making the system more tightly coupled as more users appear for your ever-growing API.
2. You can access the private methods of your superclass (yes, really). This means that refactorings of the superclass can easily break subclasses.
3. You can access the private instance variables of your superclass (yes, really). If you set what you think are your own instance variables, your superclass implementation can overwrite them.
4. You can override methods from the superclass and supply your own implementation. Some think this is a feature (see: template method pattern), but almost always this leads to pain as the superclass changes and affects every subclass implementation. You can invert this pattern by using the strategy pattern, which solves the same problem through composition.

Sometimes, though, there are legitimate situations where you want to inherit the entire interface to another object. A realistic example from Reverb is our view model hierarchy where various search views are all essentially “subclasses” of a parent view object that defines basics that every view uses, and then each view can define additional methods.

In these cases, one of the cleanest solutions is the DelegateClass pattern in Ruby. This is basically a decorator object that delegates all missing methods to the underlying object, just like inheritance would, but without giving you any access to that object’s private methods or instance variables.

Check out this example that illustrates both classical and DelegateClass-based inheritance:
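A minimal sketch along those lines (the view classes are hypothetical stand-ins):

require 'delegate'

class BaseSearchView
  def title
    "All listings"
  end

  def results_per_page
    30
  end
end

# With classical inheritance, a subclass would also pick up BaseSearchView's
# private methods and instance variables. With DelegateClass, only the
# public interface of the wrapped object is available.
class SellerSearchView < DelegateClass(BaseSearchView)
  def initialize(base_view = BaseSearchView.new)
    super(base_view)
  end

  def title
    "Listings for this seller"
  end
end

view = SellerSearchView.new
view.title            # => "Listings for this seller"
view.results_per_page # => 30, delegated to the wrapped BaseSearchView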

– Yan Pritzker (@skwp)

iOS 9 and Charles Proxy

Using Charles Proxy to debug HTTPS requests on an iOS 9 simulator now has an extra step to get it up and running. App Transport Security (ATS) is the new technology in iOS 9 (and OS X v10.11) that enforces a set of best practices for all connections between an app and its backend. In practice, it blocks HTTP requests and some HTTPS requests that don’t meet a minimum standard, unless you provide an exception in your Info.plist. Currently, it also seems to block requests when Charles is acting as a proxy for debugging purposes.

To continue to use Charles, we have to explicitly allow Insecure HTTP Loads in the Info.plist for requests on our domain to be readable in Charles. This covers (inherently) insecure HTTP connections and HTTPS connections that are not secure enough. The current beta of iOS 9 qualifies proxying through Charles as not secure, which is why we need the exception.

To do this, starting out on a fresh simulator, we’ll need to install the Charles Root Certificate.


Even with the certificate installed, we’ll still see those SSL handshake failures. Next, we need to add the exception in our Info.plist under a new NSAppTransportSecurity dict.

NSAppTransportSecurity dict in info.plist

<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <key>reverb.com</key>
    <dict>
      <key>NSIncludesSubdomains</key>
      <true/>
      <key>NSTemporaryExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>

However, as Apple has telegraphed, exceptions shouldn’t be used unless absolutely necessary. Ideally, we’d only include this exception on DEBUG builds. To accomplish this, we can use a Run Script Phase under Build Phases. Using PlistBuddy, a command line tool preinstalled on OS X for directly reading and modifying values inside a property list, we can edit the build’s copy of the Info.plist to include the exception only when we need to. Since it is changing the build’s copy, these exceptions will only be available on that Debug build, leaving your Info.plist and its own list of exceptions alone. This also means these changes won’t be checked into version control.

Go to TARGETS->App->Build Phases then click on the plus sign in the upper left and select ‘New Run Script Phase’. Paste in the script below, changing the domain values as needed.

# Add exception for Debug builds
if [ "${CONFIGURATION}" == "Debug" ]
then
# Remove any existing exception first so the Add calls below start clean
/usr/libexec/PlistBuddy -c "Delete :NSAppTransportSecurity:NSExceptionDomains:reverb.com" "${CONFIGURATION_BUILD_DIR}/${INFOPLIST_PATH}" 2>/dev/null
exitCode=$? # Suppresses failure when the key does not exist yet

/usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com dict" "${CONFIGURATION_BUILD_DIR}/${INFOPLIST_PATH}"
/usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com:NSIncludesSubdomains bool true" "${CONFIGURATION_BUILD_DIR}/${INFOPLIST_PATH}"
/usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com:NSTemporaryExceptionAllowsInsecureHTTPLoads bool true" "${CONFIGURATION_BUILD_DIR}/${INFOPLIST_PATH}"
fi

This workaround is valid as of iOS 9 Beta 6. Things could change in the future to make this unnecessary. I’ll attempt to keep this post updated if anything changes. Edit: This continues to be necessary through the iOS 9 release, so I think it is safe to say it is here to stay.

For more info on using NSAppTransportSecurity exceptions in general, Steven Peterson has an excellent blog post: http://ste.vn/2015/06/10/configuring-app-transport-security-ios-9-osx-10-11/

Kevin Donnelly
Senior Mobile Developer
@donnellyk

Disabling Animations in Espresso for Android Testing

When using Espresso for Android automated UI testing, it’s recommended that you disable system animations to prevent flakiness and ensure consistent, repeatable results. The Espresso docs provide a sample of how to disable animations programmatically, but leave out some important details. There is some discussion on that wiki page that provides good insight into solving the problems. Using those comments as a base, after lots of research and experimentation, we found a solution that works well for automatically disabling animations consistently for continuous integration tests.

Disable Animations Rule

First, we reworked the Espresso sample runner and turned it into a simple JUnit4 TestRule:

public class DisableAnimationsRule implements TestRule {
    private Method mSetAnimationScalesMethod;
    private Method mGetAnimationScalesMethod;
    private Object mWindowManagerObject;

    public DisableAnimationsRule() {
        try {
            Class<?> windowManagerStubClazz = Class.forName("android.view.IWindowManager$Stub");
            Method asInterface = windowManagerStubClazz.getDeclaredMethod("asInterface", IBinder.class);

            Class<?> serviceManagerClazz = Class.forName("android.os.ServiceManager");
            Method getService = serviceManagerClazz.getDeclaredMethod("getService", String.class);

            Class<?> windowManagerClazz = Class.forName("android.view.IWindowManager");

            mSetAnimationScalesMethod = windowManagerClazz.getDeclaredMethod("setAnimationScales", float[].class);
            mGetAnimationScalesMethod = windowManagerClazz.getDeclaredMethod("getAnimationScales");

            IBinder windowManagerBinder = (IBinder) getService.invoke(null, "window");
            mWindowManagerObject = asInterface.invoke(null, windowManagerBinder);
        }
        catch (Exception e) {
            throw new RuntimeException("Failed to access animation methods", e);
        }
    }

    @Override
    public Statement apply(final Statement statement, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                setAnimationScaleFactors(0.0f);
                try { statement.evaluate(); }
                finally { setAnimationScaleFactors(1.0f); }
            }
        };
    }

    private void setAnimationScaleFactors(float scaleFactor) throws Exception {
        float[] scaleFactors = (float[]) mGetAnimationScalesMethod.invoke(mWindowManagerObject);
        Arrays.fill(scaleFactors, scaleFactor);
        mSetAnimationScalesMethod.invoke(mWindowManagerObject, scaleFactors);
    }
}

We use the same sample code to reflectively access the methods required to change the animation values, but instead of having to replace the default Instrumentation object to disable the animations, we just add a class rule to each test class that requires animations to be disabled (i.e., basically any UI instrumentation test). The rule disables animations for the duration of all tests in the class:

@RunWith(AndroidJUnit4.class)
public class AwesomeActivityTest {

    @ClassRule
    public static DisableAnimationsRule disableAnimationsRule = new DisableAnimationsRule();

    @Test
    public void testActivityAwesomeness() throws Exception {
        // Do your testing
    }
}

Getting Permission

So the rule is set up and a test is ready to run, but you need permission to change these animation values, so this will fail with a security exception:
java.lang.SecurityException: Requires SET_ANIMATION_SCALE permission

To prevent this, the app under test must both request and acquire this permission.

To request the permission, add a <uses-permission android:name="android.permission.SET_ANIMATION_SCALE" /> tag to your AndroidManifest.xml file as you normally would for any standard permission. However, since this is only for testing, you don’t want to include it in the main manifest file. Instead, you can include it in debug builds only (against which tests will run) by adding another AndroidManifest.xml file in your project’s debug folder (“app/src/debug”) and adding the permission to that manifest. The build system will merge this into the main manifest file for debug builds when running your tests.

To acquire the permission, you need to manually grant it to your app. Since it’s a system-level permission, just adding a uses-permission tag will not automatically grant it the way other standard permissions are granted. To grant your app the permission, execute the “grant” adb shell command on the device you’re testing on after the app has been installed:

adb shell pm grant com.my.app.id android.permission.SET_ANIMATION_SCALE

Now you should be able to run your tests, disabling animations for each test suite that needs them off and restoring them when that suite completes. However, as soon as you uninstall the app, the grant is gone and you have to manually grant the permission again for the next run.

That’s whack, yo – let’s automate this.

Automating Permission Grant

In the Espresso wiki discussions, a gist is provided that solves this issue. Since we set the permission for debug builds only, we don’t need the tasks that update the permissions in the manifest and just use the tasks that grant the permission (modified slightly). We found that you need to explicitly set the package ID since the build variable evaluates to the test package ID, not the id of the app under test.

task grantAnimationPermission(type: Exec, dependsOn: 'installDebug') {
    commandLine "adb shell pm grant com.my.app.id android.permission.SET_ANIMATION_SCALE".split(' ')
}
 
tasks.whenTaskAdded { task ->
    if (task.name.startsWith('connected')) {
        task.dependsOn grantAnimationPermission
    }
}

Now the permission will be automatically granted after the app is installed on the currently connected device. However, this presents yet another problem – this will fail if you have multiple devices attached since the adb command needs a target if there is more than one device available.

Targeting Multiple Devices

This gist provides a script that allows you to run a given adb command on each device available. If we save this in the app folder as “adb_all.sh”, the task becomes:

task grantAnimationPermission(type: Exec, dependsOn: 'installDebug') {
    commandLine "./adb_all.sh shell pm grant com.my.app.id android.permission.SET_ANIMATION_SCALE".split(' ')
}

And there we go. Many hoops to jump through, but with all of that set up you can now connect multiple devices and/or emulators and just run “./gradlew cC”. Gradle will automatically build your app, deploy it to each device, grant it the SET_ANIMATION_SCALE permission, and run all of your tests with animations disabled, as required.

Is SCrypt slowing down your tests?

If you’re using SCrypt for hashing passwords, make sure you’re not also using it in your test environment, especially with fabrication-based techniques that create lots of user records. Otherwise your fabrication times will be unnecessarily slow.

Here’s how you can set authlogic to use a different provider in test mode, so that your tests are faster:


acts_as_authentic do |c|
  if Rails.env.test?
    c.crypto_provider = Authlogic::CryptoProviders::MD5
  else
    c.crypto_provider = Authlogic::CryptoProviders::SCrypt
  end
end

@skwp

Documenting architecture decisions, the Reverb way

Ever make a decision in your codebase and then come back 6 months later and have no recollection of why the code is the way it is? I certainly have.

Enter the ADR – the Architecture Decision Record. For this idea, we traveled back in time to 2011 to find this blog post from Relevance, Inc. I really loved the idea of storing decision docs right in the codebase (as we all know, there are lies, damned lies, and documentation) and thought that keeping things like this in the codebase might help prevent documentation drift.

Here are some of the key takeaways to make architecture decision docs really useful:

  1. Store ADR docs right in your codebase. We put ours in doc/architecture. Use markdown so they read nicely on github.
  2. Document decisions, not the state of things. Decisions inherently don’t need to be kept up to date. We say why we did something, and 6 months from now, our system might look different, but we now have a record of what we used to think and why we thought it.
  3. Include a TLDR section at the top that explains the decision in a few concise sections.
  4. Include a More Details section that gives more depth to the explanation.
  5. Include a Tags section in your ADR doc. These should be things like class names, function names, business concepts, etc. That way when you’re in your code and you’re grepping for a particular thing, you’ll “stumble upon” the doc.
  6. If appropriate, link to the ADR in code comments in the area where the ADR applies. If you link to the full path, like “doc/architecture/ADR5-timezones.md”, then vim’s ‘gf’ shortcut can jump you right to the doc from the code.
  7. Bonus: blog it publicly. We have blogged one of our ADRs about timezones and we’ll have another one on Grape coming out soon.
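Put together, a skeletal ADR following these conventions might look something like this (stored as, say, doc/architecture/ADR5-timezones.md; the section headings are the convention from the list above, and the contents are placeholders, not an actual Reverb ADR):

# ADR 5: Timezones

## TLDR

One or two concise paragraphs stating the decision and the main reason for it.

## More Details

The longer explanation: the alternatives considered, the trade-offs,
and why we decided the way we did at the time.

## Tags

ClassNames, function_names, business concepts someone might grep for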

Stay tuned,
Yan Pritzker, CTO
@skwp