Positive Programming with Junior Devs

Hello, World. I’m Tam, and I am writing to you fresh from my third week on the engineering team at Reverb. I also just crossed into my second year as a professional programmer. Milestones! Growth! Vim!

I think of myself as an experienced novice. Thanks to my origins in a programming bootcamp, I know a lot of other people in my boat. It’s becoming more of a ship, actually — a sizable fleet, and we are crash-landing at your company in numbers never-before-seen! Prepare thyself accordingly:

Kindness
During my first few days, different team members took me out to lunch. They all already knew my name. This made me feel welcome, which goes a long way in those strange first days.

Transparency
Within my first week, I received a document: “Expectations of Junior Developers.” This inspired my trust and confidence: they had invested time and thought into how to onboard me smoothly. It also gave me a roadmap for judging my own progress. Building self-sufficiency feels good; give people the tools to do so.

Patience
We share vim configurations here, and one of our key mappings is [,][t]. It maps to fuzzy file searching. Now, I have been typing since I was 10. I can type really quickly! But every comma I’ve ever typed has been followed by a space. Do you have any idea how many times I screwed up typing comma-t while my pair waited? We likely spent an entire collective day waiting on my fumbling fingers. I couldn’t even remember the keystrokes at first. Herein lies an opportunity for immense frustration on all sides. I urge you, experienced team member, to have patience. You are in a leadership position. If you get too frustrated too quickly, your junior stands no chance. Be patient: they are trying really hard, and it is exhausting.

We can teach you things
This week I unintentionally taught our CTO that you can split a git hunk. That was really exciting! There is a lot to know about software development. If you stay receptive, we may be able to teach you something in return.

The bottom line is, you have to be excited that we’re here. Every junior I know is thrilled, nervous, and doing everything they can to stay afloat. If you’ve screened them, you know they have potential. Try not to get in the way!

To the juniors of the world, don’t be afraid. You can do this. Find a supportive environment, keep friends close, and … Go!

@tamatojuice

Making inheritance less evil

Sometimes you come up against a problem that just seems to want to be solved with inheritance. In many cases, you can avoid that approach by flipping the problem upside down and injecting dependencies. Sandi Metz’s new RailsConf talk, Nothing is Something, does a great job of exploring this concept in a fun way.

But if you have decided that inheritance is truly the right approach, here is something you can do to make your life just a little easier. It’s called DelegateClass.

Let’s quickly summarize a few reasons why inheritance is evil, especially in Ruby:
1. You inherit the entire API of your superclass including any future additions. As the superclass grows, so do the subclasses, making the system more tightly coupled as more users appear for your ever-growing API.
2. You can access the private methods of your superclass (yes, really). This means that refactorings of the superclass can easily break subclasses.
3. You can access the private instance variables of your superclass (yes, really). If you set what you think are your own instance variables, your superclass implementation can overwrite them.
4. You can override methods from the superclass and supply your own implementation. Some think this is a feature (see: template method pattern), but almost always this leads to pain as the superclass changes and affects every subclass implementation. You can invert this pattern by using the strategy pattern, which solves the same problem through composition.
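To make points 2 and 3 concrete, here is a small sketch (the classes are hypothetical): the subclass can call the superclass’s private method and silently clobber what the superclass thinks is its own instance variable.

```ruby
class Parent
  def initialize
    @config = { retries: 3 }
  end

  def retries
    @config[:retries]
  end

  private

  def internal_default
    42
  end
end

class Child < Parent
  def initialize
    super
    @config = "oops" # silently overwrites Parent's @config
  end

  def peek
    internal_default # private in Parent, but callable from the subclass
  end
end

Child.new.peek # => 42 -- private access works

begin
  Child.new.retries # Parent's state was clobbered by the subclass
rescue TypeError => e
  e.message
end
```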

Sometimes, though, there are legitimate situations where you want to inherit the entire interface of another object. A realistic example from Reverb is our view model hierarchy, where various search views are all essentially “subclasses” of a parent view object that defines the basics every view uses, and each view can define additional methods.

In these cases, one of the cleanest solutions is the DelegateClass pattern in Ruby. This is basically a decorator object that delegates all missing methods to the underlying object, just like inheritance would, but without giving you access to that object’s private methods or instance variables.

Check out this example that illustrates both classical and DelegateClass-based inheritance:
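The original example was an embedded gist; here is a sketch of its shape (class names are illustrative). DelegateClass comes from Ruby’s standard delegate library:

```ruby
require "delegate"

# Hypothetical view classes, sketched to contrast the two approaches.
class BaseView
  def initialize(title)
    @title = title
  end

  attr_reader :title

  private

  def internal_helper
    "implementation detail"
  end
end

# Classical inheritance: the subclass sees everything, including
# BaseView's private methods and instance variables.
class ClassicalSearchView < BaseView
  def header
    "#{@title} (#{internal_helper})" # reaches into private state -- coupling!
  end
end

# DelegateClass: wraps a BaseView and forwards its *public* interface
# only; private methods and instance variables stay hidden.
class DelegatedSearchView < DelegateClass(BaseView)
  def header
    "#{title} (decorated)"
  end
end

view = DelegatedSearchView.new(BaseView.new("Guitars"))
view.title   # => "Guitars", forwarded to the wrapped BaseView
view.header  # => "Guitars (decorated)"
```

Calling view.internal_helper raises NoMethodError, which is exactly the isolation we want.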

– Yan Pritzker (@skwp)

iOS 9 and Charles Proxy

Using Charles Proxy to debug HTTPS requests on an iOS 9 simulator now has an extra step to get it up and running. App Transport Security (ATS) is the new technology in iOS 9 (and OS X v10.11) that enforces a set of best practices for all connections between an app and its backend. In practice, it blocks HTTP requests and some HTTPS requests that don’t meet a minimum standard, unless you provide an exception in your Info.plist. Currently, it also seems to block requests when Charles is acting as a proxy for debugging purposes.

To continue to use Charles, we have to explicitly allow Insecure HTTP Loads in the Info.plist for requests on our domain to be readable in Charles. This covers (inherently) insecure HTTP connections and HTTPS connections that are not secure enough. The current beta of iOS 9 qualifies proxying through Charles as not secure, which is why we need the exception.

To do this, starting out on a fresh simulator, we’ll need to install the Charles Root Certificate.


We’ll still be seeing those SSL handshake failures. Next, we need to add the exception in our Info.plist under a new NSAppTransportSecurity dict:


<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <key>reverb.com</key>
    <dict>
      <key>NSIncludesSubdomains</key>
      <true/>
      <key>NSTemporaryExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>

However, as Apple has telegraphed, exceptions shouldn’t be used unless absolutely necessary. Ideally, we’d only include this exception on DEBUG builds. To accomplish this, we can use a Run Script Phase under Build Phases. Using PlistBuddy, a command line tool preinstalled on OS X for directly reading and modifying values inside a property list, we can edit the app’s Info.plist to include the exception only when we need to.

Go to TARGETS->App->Build Phases then click on the plus sign in the upper left and select ‘New Run Script Phase’. Paste in the script below, changing the domain values as needed.

# Remove exception for all builds
/usr/libexec/PlistBuddy -c "Delete :NSAppTransportSecurity" "${INFOPLIST_FILE}" 2>/dev/null
exitCode=$? # Suppresses the failure if the key doesn't exist

# Add exception for Debug builds
if [ "${CONFIGURATION}" = "Debug" ]
then
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity dict" "${INFOPLIST_FILE}"
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains dict" "${INFOPLIST_FILE}"
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com dict" "${INFOPLIST_FILE}"
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com:NSIncludesSubdomains bool true" "${INFOPLIST_FILE}"
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com:NSTemporaryExceptionAllowsInsecureHTTPLoads bool true" "${INFOPLIST_FILE}"
fi

If you have existing ATS exceptions that you don’t want to overwrite, this can be edited to simply add a specific domain to the existing dictionary:

# Remove exception for all builds
/usr/libexec/PlistBuddy -c "Delete :NSAppTransportSecurity:NSExceptionDomains:reverb.com" "${INFOPLIST_FILE}" 2>/dev/null
exitCode=$? # Suppresses the failure if the key doesn't exist

# Add exception for Debug builds
if [ "${CONFIGURATION}" = "Debug" ]
then
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com dict" "${INFOPLIST_FILE}"
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com:NSIncludesSubdomains bool true" "${INFOPLIST_FILE}"
  /usr/libexec/PlistBuddy -c "Add :NSAppTransportSecurity:NSExceptionDomains:reverb.com:NSTemporaryExceptionAllowsInsecureHTTPLoads bool true" "${INFOPLIST_FILE}"
fi

This workaround is valid as of iOS 9 Beta 3. Things could change in the future to make this unnecessary. I’ll attempt to keep this post updated if anything changes.

For more info on using NSAppTransportSecurity exceptions in general, Steven Peterson has an excellent blog post: http://ste.vn/2015/06/10/configuring-app-transport-security-ios-9-osx-10-11/

Kevin Donnelly
Senior Mobile Developer
@donnellyk

Disabling Animations in Espresso for Android Testing

When using Espresso for Android automated UI testing, it’s recommended that you disable system animations to prevent flakiness and ensure consistent, repeatable results. The Espresso docs provide a sample of how to disable animations programmatically, but leave out some important details. The discussion on that wiki page provides good insight into solving the problems. Using those comments as a base, and after lots of research and experimentation, we found a solution that reliably disables animations for continuous integration tests.

Disable Animations Rule

First, we reworked the Espresso sample runner and turned it into a simple JUnit4 TestRule:

public class DisableAnimationsRule implements TestRule {
    private Method mSetAnimationScalesMethod;
    private Method mGetAnimationScalesMethod;
    private Object mWindowManagerObject;

    public DisableAnimationsRule() {
        try {
            Class<?> windowManagerStubClazz = Class.forName("android.view.IWindowManager$Stub");
            Method asInterface = windowManagerStubClazz.getDeclaredMethod("asInterface", IBinder.class);

            Class<?> serviceManagerClazz = Class.forName("android.os.ServiceManager");
            Method getService = serviceManagerClazz.getDeclaredMethod("getService", String.class);

            Class<?> windowManagerClazz = Class.forName("android.view.IWindowManager");

            mSetAnimationScalesMethod = windowManagerClazz.getDeclaredMethod("setAnimationScales", float[].class);
            mGetAnimationScalesMethod = windowManagerClazz.getDeclaredMethod("getAnimationScales");

            IBinder windowManagerBinder = (IBinder) getService.invoke(null, "window");
            mWindowManagerObject = asInterface.invoke(null, windowManagerBinder);
        }
        catch (Exception e) {
            throw new RuntimeException("Failed to access animation methods", e);
        }
    }

    @Override
    public Statement apply(final Statement statement, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                setAnimationScaleFactors(0.0f);
                try { statement.evaluate(); }
                finally { setAnimationScaleFactors(1.0f); }
            }
        };
    }

    private void setAnimationScaleFactors(float scaleFactor) throws Exception {
        float[] scaleFactors = (float[]) mGetAnimationScalesMethod.invoke(mWindowManagerObject);
        Arrays.fill(scaleFactors, scaleFactor);
        mSetAnimationScalesMethod.invoke(mWindowManagerObject, scaleFactors);
    }
}

We use the same sample code to reflectively access the methods required to change the animation values. But instead of replacing the default Instrumentation object to disable the animations, we just add a class rule to each test class that requires animations to be disabled (basically any UI instrumentation test). The rule disables animations for the duration of all tests in the class:

@RunWith(AndroidJUnit4.class)
public class AwesomeActivityTest {

    @ClassRule
    public static DisableAnimationsRule disableAnimationsRule = new DisableAnimationsRule();

    @Test
    public void testActivityAwesomeness() throws Exception {
        // Do your testing
    }
}

Getting Permission

So the rule is set up and a test is ready to run, but you need permission to change these animation values, so this will fail with a security exception:
java.lang.SecurityException: Requires SET_ANIMATION_SCALE permission

To prevent this, the app under test must both request and acquire this permission.

To request the permission, simply add it to an AndroidManifest.xml file as you would any standard permission. However, since this is only for testing, you don’t want to include it in the main manifest file. Instead, you can include it in debug builds only (against which tests run) by adding another AndroidManifest.xml file in your project’s debug folder (“app/src/debug”) and declaring the permission there. The build system will merge this into the main manifest file for debug builds when running your tests.
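For example, the debug-only manifest can contain little more than the permission itself (package name illustrative; the manifest merger folds it into the main manifest):

```xml
<!-- app/src/debug/AndroidManifest.xml -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.my.app.id">

    <uses-permission android:name="android.permission.SET_ANIMATION_SCALE" />

</manifest>
```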

To acquire the permission, you need to manually grant the permission to your app. Since it’s a system level permission, just adding a uses-permission tag will not automatically grant you the permission like other standard permissions. To grant your app the permission, execute the “grant” adb shell command on the device you’re testing on after the app has been installed:

adb shell pm grant com.my.app.id android.permission.SET_ANIMATION_SCALE

Now you should be able to run your tests and disable animations for each test suite that needs them off and restore them when that suite completes. However, as soon as you uninstall the app your grant is gone and you have to manually grant the permission again for the next run.

That’s whack, yo – let’s automate this.

Automating Permission Grant

In the Espresso wiki discussions, a gist is provided that solves this issue. Since we set the permission for debug builds only, we don’t need the tasks that update the permissions in the manifest; we just use the (slightly modified) tasks that grant the permission. We found that you need to explicitly set the package ID, since the build variable evaluates to the test package ID, not the ID of the app under test.

task grantAnimationPermission(type: Exec, dependsOn: 'installDebug') {
    commandLine "adb shell pm grant com.my.app.id android.permission.SET_ANIMATION_SCALE".split(' ')
}
 
tasks.whenTaskAdded { task ->
    if (task.name.startsWith('connected')) {
        task.dependsOn grantAnimationPermission
    }
}

Now the permission will be automatically granted after the app is installed on the currently connected device. However, this presents yet another problem – this will fail if you have multiple devices attached since the adb command needs a target if there is more than one device available.

Targeting Multiple Devices

This gist provides a script that allows you to run a given adb command on each device available. If we save this in the app folder as “adb_all.sh” the task becomes:

task grantAnimationPermission(type: Exec, dependsOn: 'installDebug') {
    commandLine "./adb_all.sh shell pm grant com.my.app.id android.permission.SET_ANIMATION_SCALE".split(' ')
}

And there we go. Many hoops to jump through, but with all of that set up you can now connect multiple devices and/or emulators and just run “./gradlew cC”. Gradle will automatically build your app, deploy it to each device, grant it the SET_ANIMATION_SCALE permission, and run all of your tests with animations disabled as required.

Is SCrypt slowing down your tests?

If you’re using SCrypt for hashing passwords, make sure you’re not using it in your tests with fabrication-based techniques. Hashing every fabricated password with SCrypt results in unnecessarily slow fabrication times.

Here’s how you can set authlogic to use a different provider in test mode, so that your tests are faster:


acts_as_authentic do |c|
  if Rails.env.test?
    c.crypto_provider = Authlogic::CryptoProviders::MD5
  else
    c.crypto_provider = Authlogic::CryptoProviders::SCrypt
  end
end

@skwp

Documenting architecture decisions, the Reverb way

Ever make a decision in your codebase and then come back 6 months later and have no recollection of why the code is the way it is? I certainly have.

Enter the ADR: the Architecture Decision Record. For this idea, we traveled back in time to 2011 to find this blog post from Relevance, Inc. I really loved the idea of storing decision docs right in the codebase. We all know that there are lies, damned lies, and documentation, and keeping docs like this in the codebase might help prevent documentation drift.

Here are some of the key takeaways to make architecture decision docs really useful:

  1. Store ADR docs right in your codebase. We put ours in doc/architecture. Use markdown so they read nicely on GitHub.
  2. Document decisions, not the state of things. Decisions inherently don’t need to be kept up to date. We say why we did something; 6 months from now our system might look different, but we have a record of what we used to think and why we thought it.
  3. Include a TLDR section at the top that explains the decision in a few concise sentences.
  4. Include a More Details section that gives more depth to the explanation.
  5. Include a Tags section in your ADR doc. Tags should be things like class names, function names, business concepts, etc. That way, when you’re grepping the code for a particular thing, you’ll “stumble upon” the doc.
  6. If appropriate, link to the ADR in code comments in the area where the ADR applies. If you link to the full path like “doc/architecture/ADR5-timezones.md”, then vim’s ‘gf’ shortcut can jump you right to the doc from the code.

Bonus: blog it publicly. We have blogged one of our ADRs about timezones and we’ll have another one on Grape coming out soon.
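Put together, a skeleton ADR following these conventions might look like this (contents illustrative, loosely modeled on our timezone ADR):

```markdown
# ADR 5: How we handle timezones

## TLDR

All systems store times in UTC; we convert to the user's timezone only at
display time, using Time.zone.

## More Details

Our production servers run UTC, so times coming out of the database, redis,
and elasticsearch are already UTC. Converting once at the edge avoids
double-conversion bugs...

## Tags

Time.zone, TimeWithZone, created_at, Temporal, timezones
```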

Stay tuned,
Yan Pritzker, CTO
@skwp

PayPal Express Checkout Broken – Use Webscr Fallback

TLDR:
If your PayPal checkout suddenly started redirecting to HP.com and teespring.com, the fix is to replace your checkout url https://paypal.com/checkoutnow with https://www.paypal.com/webscr?cmd=_express-checkout

Details:
This morning we were alerted to users experiencing bizarre problems in our checkout. After clicking Check out with PayPal, they were redirected to a checkout page for HP.com or teespring.com. This page was prefilled with a static dollar amount unrelated to what we were sending, and sometimes with an email address of someone who was not our customer.

After checking with these websites, we found out that they too were experiencing checkout issues. We now suspect that this was affecting all websites using the new express checkout base url (https://paypal.com/checkoutnow). In fact, if you just went to that url, you would see the strangely cached HP or teespring checkout, even in an incognito window.


We immediately rolled out a change to our checkout to disable the PayPal button, as this looked very fishy to our users, even though it was not our problem.

There was no immediate response from PayPal or HP on Twitter, though HP confirmed through their site support that they were having checkout issues as well. Teespring confirmed this too.

We then discovered that the original express checkout url (https://www.paypal.com/webscr?cmd=_express-checkout) works just fine. We were able to replace our base url quickly (thanks, Chef!) and roll out a fix to our users.

Strange but true: there is one other express checkout url that works, and that is https://paypal.com/checkoutnow/2. That is not a typo; the “/2” at the end actually forces the checkout into some special mode that is completely functional. However, we could not find any evidence of this URL being officially supported, aside from some Stack Overflow posts, so until we hear more from PayPal, we’ll be using the old “webscr” url.

therubyracer and libv8 Yosemite gem/bundler problems – the simplest fix of all

After doing a clean install of Yosemite, some of our developers had issues compiling therubyracer/libv8. After scouring the internet, we found many awkward and horrible-sounding workarounds, ranging from downgrading versions to obscure command-line compilation switches or compiler changes, none of which really worked.

So after the obligatory cut-n-paste-from-stackoverflow fest, we asked the underlying question: what are therubyracer and libv8, and why do we even need them?

Well, it turns out therubyracer is used only for asset compilation by execjs. And what’s more, you don’t even need it on a Mac: on OS X, execjs can use Apple’s JavaScriptCore, which comes as part of the system install.

So why would we want therubyracer in our Gemfile? Its entire reason for existence is really asset compilation on Ubuntu. What else can do the same job with zero pain? Node.js.

So here’s the simplest possible fix:

  1. Remove therubyracer and libv8 from your Gemfile entirely.
  2. If compiling assets on Ubuntu servers (e.g. Jenkins), apt-get install nodejs.
  3. Profit.

Enjoy the rest of your day and the countless hours saved trying to get bundler on OS X to do the right thing.

till next time,

Yan Pritzker
CTO, Reverb.com
@skwp

Fun with setInterval and Turbolinks

Turbolinks is a fantastic tool that speeds up page loads by loading only the body of each page using ajax. The downside of using it, though, is that our JavaScript sins are no longer erased by a full page load when a user clicks on a link.

We ran into this recently while using setInterval to periodically poll the server for new information. Our initial code looked something like this:
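The original snippet was an embedded gist that didn’t survive; here is a reconstruction of its shape. The poll function, the 5-second delay, and the `events` parameter (standing in for jQuery’s $(document)) are all illustrative:

```javascript
// Reconstruction of the naive approach: start polling on every
// Turbolinks page:change. `events` stands in for $(document).
function startPolling(pollFn, events) {
  events.on('page:change', function () {
    // A brand-new interval on every Turbolinks visit -- and since nothing
    // ever clears them, polling continues after the user navigates away.
    setInterval(pollFn, 5000);
  });
}
```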

Since the page was never reloaded, once the polling started it continued even while the user was on a different page, which resulted in a lot of unnecessary ajax requests. In addition, each time the page with polling was visited, a new setInterval process was created. Obviously this could get quite out of hand.

To fix this, we knew we would need a clearInterval call of some sort. We tried this:
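The attempted fix, reconstructed in the same illustrative shape (`events` again stands in for $(document)): remember the interval id and clear it on the next page:change.

```javascript
// Keep the interval id around and clear it when the page changes.
function startPollingWithCleanup(pollFn, events) {
  var id = setInterval(pollFn, 5000);
  events.on('page:change', function () {
    clearInterval(id); // fires sooner than expected -- see below
  });
}
```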

This seemed to work, except it instantly cleared intervals right after they were set. We experimented with binding to different events, but it was hard to predict the actual order of code execution, regardless of the order of the lines of code or the order of the events we bound to. The only way to be absolutely sure that setInterval would be started, and that clearInterval would then be bound to page:change, was to unbind the clearInterval handler when it fires.

Now our setInterval only runs on the desired page. If you have a lot of different setInterval events in your turbolinks app, you could easily create a function that handles all of this for you:
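Such a helper might look like this (our own sketch; `events` is any object with jQuery-style on/off, e.g. $(document), and the names are illustrative):

```javascript
// Ties an interval's lifetime to a single Turbolinks page: clears the
// interval on the next page:change, then unbinds itself so later pages
// start clean.
function pageInterval(callback, delayMs, events) {
  var id = setInterval(callback, delayMs);
  function stop() {
    clearInterval(id);
    events.off('page:change', stop); // unbind once it has fired
  }
  events.on('page:change', stop);
  return id;
}

// Usage with jQuery: pageInterval(pollForUpdates, 5000, $(document));
```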

How not to fail at Timezones in Rails

We recently discovered a test that was failing only at night. Of course this set off all kinds of alarms in my head – we must be screwing something up with timezones! Time for an audit. Let’s review how Rails deals with timezones.

Water, fire, air, and dirt, f**king timezones, how do they work?

The following is taken almost verbatim from an Architecture Decision Record doc in our codebase.

In Rails, only the Time.zone methods, like Time.zone.now and Time.zone.parse, are in the Rails-configured timezone. Everything else, like DateTime.now and Time.now, is in system time.

TLDR

What Rails does:

  • In Rails, Time.zone refers to the Rails (not system) timezone, set by config.time_zone (typically in application.rb)
  • DateTime and Time are not otherwise Rails aware, therefore, DateTime.now and Time.now both return times in the system timezone.
  • 1.month.ago and similar methods use the Rails timezone, but DateTime.now.last_month uses the system timezone
  • When retrieving things from ActiveRecord, they will be timezoned to the Rails timezone, so Product.first.created_at will give you a time in the Rails timezone (not the system timezone)

What we do:

  • Our production servers run in UTC
  • Our dev machines typically run in CST/CDT
  • We are currently configuring our Rails timezone as Central Time but almost never using Time.zone to use it
  • We are also using the Temporal gem, which uses JavaScript to set Time.zone to the timezone of the user who is browsing

Decisions

All systems should use UTC

Internally, all times should be stored in UTC (database, redis, elasticsearch). This will be the case because these systems are running in UTC.

If users submit an absolute date or time in a form

The form object or controller must parse that time using Time.zone.parse

If we display an absolute time to a user

  • First, try to avoid this by displaying relative times like “2 days ago”
  • If we must display an absolute time to the user, it should be shown using Time.zone; this generally just works if the time came from ActiveRecord.
  • If you want to display the current time or a specific time, you must use Time.zone.now or Time.zone.parse(“Your Specific Time”)

We will continue to use Central as our default timezone

We will continue to default the Rails timezone to Central so that if we can’t guess the user’s timezone using Temporal, absolute displayed times will be in Central.

In tests

In tests, do not mix Rails timezone methods like “1.month.ago” with system timezone methods like “DateTime.now.last_month”


Yan Pritzker
CTO, Reverb.com
@skwp

Finding where a method is defined in Ruby

Ruby can sometimes look like magic but, like all magic, if you look hard enough you can see the sleight of hand. With method_missing, dynamically defined methods, and the ability to extend objects at runtime, it can seem like voodoo to figure out where a called method is actually defined. You could do a bunch of reverse engineering, or you could just ask Ruby.
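The asking is done with Method#owner and Method#source_location (Guitar is a made-up example class):

```ruby
# Method#owner tells you which class or module defines a method;
# Method#source_location points at the file and line.
class Guitar
  def strum
    "jangle"
  end
end

m = Guitar.new.method(:strum)
m.owner            # => Guitar
m.source_location  # => [file, line] where strum is defined (nil for C methods)

# The same works without an instance, via an unbound lookup:
Guitar.instance_method(:strum).owner # => Guitar
```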

Conditional Validations with Rails

Adding validations to your ActiveRecord models always starts out pretty simple. But as your app grows, your business rules compound in complexity and your data can become untidy. Oftentimes we require that certain models are validated differently under different contexts. Here are some examples that I’ll talk about:

  1. Validating a field conditionally upon the value of a different field.
    eg: Listings require a photo only if they are published.
  2. Validating a field conditionally upon who is saving the record.
    eg: Admins can make a shop name anything, while normal users must conform to a specific format.
  3. Validating a field only on a specific form.
    eg: Users signing up through your normal sign-up form must accept a terms of service.
  4. Adding a new field to an existing record that can’t be backfilled.
    eg: You want new users to give you their phone number, but you don’t have phone numbers for existing users.

What does Rails give us?

Rails gives us a few tools as part of the validation API, outlined here in their RailsGuides. Here’s how we might use the ‘if’ option with the validates method to accomplish use case #1 above:

We can abuse this further by adding some custom methods on our model. See how we might accomplish our special admin validations (#2 above):

Rails also gives us the ‘on’ option that allows us to specify validations that should happen only on create or update. With it, we can create a functional solution to #3 above:

This isn’t a perfect solution, though, because now if we want to create users elsewhere (for example, from an admin screen), we would still need to pass this attribute. We could utilize our “edit_as_admin!” method as in the previous section, but since this validation really only applies to one specific workflow in our app (the new user signup), I think it ideally calls for a different approach. Enter the “form object”.

Form objects

Form objects are table-less models – they quack a lot like ActiveRecord, but they don’t actually save to a database. They represent the state and validation of forms themselves. Given the example above, I might consider refactoring my signup action to use a UserSignupForm. With Rails 4, we now have a single mixin – ActiveModel::Model – that makes this very straightforward. Here’s how you might implement such an object, solving our TOS validation:

Besides being a good place for use-case-specific validations, form objects give us a lot of other benefits. We can easily handle non-persisted attributes, save multiple objects with more flexibility than “accepts_nested_attributes_for”, and even represent errors external to data validations (from talking to 3rd parties, for example) in the same way we represent our validation errors.

Note: Rails does have an ‘acceptance’ validation especially for this use-case. Use it if it works for you, but the idea above still stands.

Making use of modules

One issue that may arise from breaking up all your separate use-cases into form objects is introducing duplication in your validations. If you can imagine having both a UserSignupForm and a UserEditForm (and maybe even an AdminUserEditForm), duplicating validations across those forms quickly becomes a pain. Now of course if it makes sense, you can keep some shared validations in the model itself. If you can’t, you can still clean things up by grouping validations into sensible modules.

Here’s how you might extract validations into a reusable mixin:

As always, use the right tool for the job

Obviously none of these techniques are a silver bullet, and you need to decide what makes the most sense for a given case. Hopefully if you’ve learned a new technique from this post, you’ll have an alternative to stuffing every piece of validation into your model.

With that in mind, let’s think about how we would solve situation #4 – adding a new attribute without a backfill. Well, we could put it in the model with a condition:

But if you want to require the phone number only on a signup form, or possibly on the signup and account edit forms, then it might make sense to put the validation on a form object (and maybe use a mixin).

Favor a technique not covered in the post? I’d love to hear about it in the comments below. Happy validating!

Project Planning for Self-Managing Developers

As a first-time start-up employee, I originally struggled to adapt to the high level of responsibility I have here at Reverb.com. One of the responsibilities I have is planning my own projects, from preparation through delivery. At Reverb.com, developers are project managers.

Planning a project is super important. When you skimp on planning, you open yourself up to rapidly changing requirements and redundant work, which is inefficient and frustrating.

Luckily, with a little effort, you can prepare adequately for a project of any size. There are three important steps to planning a project: understanding, estimation, and communication.

Understanding

Understanding is essentially research. In order to efficiently solve a problem, you need to understand the problem inside and out. Step 1? Identify your customers and stakeholders. Basically figure out with whom you’ll need to communicate. Who are you working for? Who will be affected by what you’re doing?

Once you have a handle on the stakeholders, source the requirements. Put yourself in the shoes of the customer experiencing the issue or better yet, talk directly to customers to get some first-hand accounts. The clearer your understanding of why you’re executing a project, the less likely you are to encounter changing requirements and pivots.

Now reinforce your knowledge of the requirements by learning about the current process. Diagram the flow. Make sure you technically understand the backend of the issue. Is the current system easily changeable? Do you need to refactor existing classes? This will give you context for deciding between possible solutions.

Finally, break down the requirements into small parts. Not only does this make the problem easier to digest and reason about, it also forces you to ensure your sourced requirements are specific. Ask questions. Get answers.

Estimation

The second phase of planning is making estimations. Creating a timeline, however rough, helps keep you on track and efficient. This is especially important at lean companies that expect high levels of transparency and productivity. It’s also much simpler if you’ve followed the above advice and broken down your problem into smaller chunks.

When you have estimations for your requirements, make sure to timebox yourself when you execute the project. This means, when working on a requirement, restrict yourself to the length of time you estimated it would take to finish that requirement. If you’re adding a comment system to a blog and you think it’ll take you 6 hours, make sure you stop after 6 hours. Re-evaluate your progress and the requirements. If your estimate was off, think about why and alter your estimates accordingly.

Estimation is more art than science, and it can sometimes seem like more of a struggle than it’s worth, but the benefit of approximating your efforts is that it gives you a frame of reference from which you can reason about project completion. Without some form of estimation, communicating your progress is impossible.
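To make "frame of reference" concrete, here's a toy Ruby sketch of comparing estimates to actuals so you know which timeboxes to re-evaluate. All task names and numbers are made up for illustration:

```ruby
# Track each requirement's estimate vs. what it actually took,
# and flag the ones that blew their timebox.
tasks = [
  { name: "comment model",  estimate_h: 2, actual_h: 1.5 },
  { name: "comment UI",     estimate_h: 6, actual_h: 9.0 },
  { name: "spam filtering", estimate_h: 4, actual_h: 4.0 },
]

over = tasks.select { |t| t[:actual_h] > t[:estimate_h] }
over.each do |t|
  puts "#{t[:name]}: re-evaluate (#{t[:actual_h]}h actual vs #{t[:estimate_h]}h estimated)"
end
```

The point isn't the code, it's the habit: without the estimate column, the "over budget" question has no answer at all.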

Communication

The last step of planning a project is communication. Everyone should be on the same page as much as possible. Requirements change? Tell your stakeholders. Design tweaked? Share with your customers. Blocked technically? Keep your boss posted. You should be communicating with all previously identified customers and stakeholders on a regular basis. Share your estimates, timeline, and the project requirements (which are basically the reasoning for your estimates) before starting work on the project (and as they change). Speak up when you’re blocked for any reason. Even if it can’t be helped immediately, it will make sure no one is surprised or under a false impression.

One important point that I struggle with occasionally: you are a stakeholder too. Communicate with yourself! Slow down and re-evaluate the project regularly. Rubber ducking is your friend.

I hope this was helpful! If you have any comments or tips, please leave a note! Thanks for reading.

Joe Levering
@JoeLevering
Joe@reverb.com

Shopify Rate Limits, Sidekiq, and You

We’re just about to launch our Shopify App, which allows Shopify shops on Reverb to sync their inventory from Shopify to Reverb.

Working with Shopify at any decent scale requires respecting their rate limits, which makes API access rather tricky. Shopify allows 2 requests/second per shop, burstable via a “leaky bucket” algorithm: you can burst up to 40 requests, but bursting fills your bucket, and once it’s full you start getting 429 errors telling you to slow your roll.
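The leaky bucket is easy to picture in code. Here's a minimal sketch of the accounting, assuming a bucket of 40 credits leaking at 2 per second (this is our illustration, not Shopify's actual implementation):

```ruby
# Each request adds a credit to the bucket; credits "leak" out at a fixed
# rate. A full bucket means the next request would be rejected (a 429).
class LeakyBucket
  def initialize(capacity: 40, leak_rate: 2.0, clock: -> { Time.now.to_f })
    @capacity  = capacity
    @leak_rate = leak_rate   # credits drained per second
    @clock     = clock       # injectable for testing
    @level     = 0.0
    @last      = clock.call
  end

  # True if a request may be sent now; false models getting a 429.
  def allow?
    drain
    return false if @level + 1 > @capacity
    @level += 1
    true
  end

  private

  def drain
    now    = @clock.call
    @level = [@level - (now - @last) * @leak_rate, 0.0].max
    @last  = now
  end
end
```

With these numbers, a cold bucket absorbs a burst of 40 requests, after which you're limited to the sustained 2/second as credits leak back out.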

In their docs they recommend a rather “interesting” way of measuring your own rate and trying to preemptively rate limit yourself. Even though the code looks slightly unpleasant, it should in theory work, for some value of “work”. However, you quickly run into the major caveat: it’s only good for single-threaded programs…which, if you’re building a platform that’s going to handle more than one Shopify shop with thousands of SKUs, quickly doesn’t scale.
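For the curious, the self-throttling idea roughly amounts to: after each response, read Shopify's X-Shopify-Shop-Api-Call-Limit header (reported as "used/limit") and sleep when the bucket is nearly full. A sketch of that shape (the 80% threshold and the injectable sleeper are our choices, not Shopify's):

```ruby
# After each API response, check how full the call-limit bucket is and
# pause long enough for a credit to leak back (2/sec => ~0.5s per credit).
def throttle_from(response, threshold: 0.8, sleeper: method(:sleep))
  used, limit = response["X-Shopify-Shop-Api-Call-Limit"].split("/").map(&:to_f)
  sleeper.call(0.5) if used / limit >= threshold
end
```

You can see why this only works single-threaded: with many workers sharing one bucket, each thread's view of "used" is stale the moment it reads it.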

At scale, you’re going to want to use something like Sidekiq. We’re using Sidekiq Pro, which has the advantage of being able to track a bulk sync job as a batch, so that we can split it up into pieces and churn through them as quickly as Shopify will allow us.

In order to handle rate limiting, we are using two approaches. The first one freedom-patches ActiveResource::Connection, which the Shopify API gem uses to make requests. This patch is courtesy of a Shopify forum post; we’ve slightly adapted it with logging so it’s more obvious when the rate limiting kicks in. Although this is incredibly intimate with ActiveResource and is likely to break in the future, it seems to be the only reasonable way to handle this at the level where it should be handled, rather than pushing the responsibility onto all the callers:

But this really should be used as a backup plan. In order to rate limit our sidekiq jobs, we’re going to use sidekiq-rate-limiter, like this:
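A sketch of what that configuration might look like (worker, queue, and argument names are illustrative; check sidekiq-rate-limiter's README for the exact option names it supports):

```ruby
require 'sidekiq'

# Each job carries its shop_url as the first argument; using it as the
# rate-limiter's name gives every shop its own 2-requests/second bucket.
class ShopifySyncWorker
  include Sidekiq::Worker

  sidekiq_options queue: 'shopify_sync',
                  rate: {
                    name:   ->(shop_url, *_rest) { shop_url }, # per-shop bucket
                    limit:  2,  # Shopify's sustained rate
                    period: 1   # per second
                  }

  def perform(shop_url, resource_type, resource_id)
    # one API call's worth of sync work goes here
  end
end
```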

Note that in this implementation, our workers are assumed to take the three args specified. The rate limiter then uses the shop_url, which is always the first arg, to namespace the rate limiting. This makes sure that each shop’s sync jobs get their own rate limiter.

And this, believe it or not, appears to be the simplest way to work with the Shopify API in a multithreaded environment. If anyone has thoughts on how this can be simplified, I’d love to hear them!

Yan Pritzker
CTO, Reverb.com
@skwp