Friday, October 02, 2015

Rate limiting Shopify API using Cuttle

During the development of a Shopify app, we must respect the API rate limit set by Shopify. Typically, we can insert sleep() calls to pause between API calls. This simple method works great until multiple processes make API calls concurrently.
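The naive single-process approach might look like this (a minimal Python sketch; the function name and interval are made up for illustration):

```python
import time

def run_throttled(calls, interval=0.5):
    # Naive rate limiting: pause between successive API calls.
    # Works only while a single process makes all the calls.
    results = []
    for call in calls:
        results.append(call())
        time.sleep(interval)
    return results
```

Once several workers run this loop concurrently, each one sleeps independently and the combined call rate exceeds the limit.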

There are quite a number of ways to solve the problem.

1. Serialize all API calls into a single process, though not all business logic can work this way.
2. Host an RPC server / use a task queue to make API calls, where the RPC server / queue manager rate limits the calls.
3. Centralize all API calls behind an HTTP proxy that performs rate limiting.
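Options 1 and 2 share the same idea: funnel every API call through a single consumer so that pacing happens in exactly one place. A hedged Python sketch of that idea (the names and interval are illustrative, not from any framework):

```python
import queue
import threading
import time

def start_api_worker(api_queue, interval=0.5):
    # All producers enqueue callables; one worker thread executes
    # them sequentially with a pause, so pacing is centralized.
    results = []

    def worker():
        while True:
            call = api_queue.get()
            if call is None:  # sentinel: stop the worker
                break
            results.append(call())
            time.sleep(interval)

    thread = threading.Thread(target=worker)
    thread.start()
    return thread, results
```

A real deployment would use a proper task queue with a rate limit setting instead of an in-process thread, which is exactly the heavyweight machinery discussed below.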

Personally, I think the RPC server / task queue option is quite heavyweight since it requires:

* an RPC / task framework, and
* an RPC server / task queue, and
* a rate limit system built around the RPC server / task queue.

In contrast, the HTTP proxy option only requires an HTTP proxy server plus an HTTP client. And HTTP is well supported in many programming languages and systems. It sounds like a great starting point.

(BTW, HTTP can be considered the underlying protocol of an RPC system.)

With the HTTP proxy option, there are quite a few ways to get started.

1. Use an Nginx reverse proxy to wrap the API, and use its limit module for simple rate limiting or write a Lua/JS plugin for more sophisticated control.
2. Use a Squid forward proxy to perform simple rate limiting by client info (e.g. IP address).

At first glance, the Nginx reverse proxy option looks superior since we can deploy sophisticated rate limit control. However, this approach requires using the Nginx-wrapped URL of the Shopify API, or modifying DNS/host configuration to route the traffic.

Personally, I am not comfortable modifying the URL of the Shopify API since that may prevent a smooth upgrade of the Shopify API client in the future. As for the DNS option, shall I modify the DNS configuration every time a new Shopify store installs the app?

(We may also route all traffic to the default virtual host of Nginx and use a Lua/JS plugin for host routing. This requires neither URL wrapping nor DNS configuration. Though, I personally think this is kind of abusing Nginx.)

So, a reverse proxy may not be the way to go. Let's turn to the forward proxy option. In this case, we do not need to touch the URL of the Shopify API at all; we just let the traffic go through the proxy by configuring the HTTP client. A forward proxy with rate limit control sounds like a good way to go.

Here, we come to Cuttle proxy.

Cuttle proxy is an HTTP forward proxy, written in Go, designed solely for rate limiting outbound traffic. It provides a set of rate limit controls for different scenarios. In the case of the Shopify API, we can use the following Cuttle settings to perform rate limiting.

addr: :3128
zones:
  - host: "*"
    shared: false
    control: rps
    rate: 2
  - host: "*"
    shared: true
    control: noop

Then, set the HTTP proxy of the Shopify API client as below to route API calls through Cuttle.

import json
import os

import shopify

# Route HTTPS traffic through Cuttle (assuming it listens on localhost:3128).
os.environ['HTTPS_PROXY'] = 'http://localhost:3128'

shop_url = 'https://{}:{}@{}/admin'.format(API_KEY, PASSWORD, SHOPIFY_DOMAIN)
shopify.ShopifyResource.set_site(shop_url)

print(json.dumps(shopify.Shop.current().to_dict()))


As long as all API clients are configured to use Cuttle, API calls will be rate limited at 2 requests per second per Shopify store. So, the rate limit bucket would rarely go empty.

Note: It is up to you to set the rate of API calls in Cuttle; 3 requests per second per store would be another great option. You will receive HTTP 429 from Shopify after roughly 120 continuous API calls to the same store over 40 seconds.
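The 120-call figure follows from the leaky bucket commonly described for the classic Shopify API: a burst capacity of 40 requests, draining at 2 requests per second. A back-of-envelope Python sketch (the constants reflect the publicly documented limits at the time of writing):

```python
BUCKET_SIZE = 40   # burst capacity of the Shopify leaky bucket
LEAK_RATE = 2.0    # requests drained per second

def seconds_until_429(call_rate):
    # Above the leak rate, the bucket fills at (call_rate - LEAK_RATE)
    # per second and overflows once it reaches BUCKET_SIZE.
    if call_rate <= LEAK_RATE:
        return float('inf')  # never overflows
    return BUCKET_SIZE / (call_rate - LEAK_RATE)

# At 3 requests per second: 40 / (3 - 2) = 40 seconds,
# i.e. roughly 3 * 40 = 120 continuous calls before HTTP 429.
```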

Note: API calls are forwarded by Cuttle in a first-come, first-served manner. If the concurrency level of API calls to the same Shopify store is high, some API calls will wait for a significant amount of time instead of immediately receiving HTTP 429 from Shopify. Remember to set a reasonable HTTP timeout in that case.

(FYI, the Shopify API rate limit only favors a high concurrency level for a short duration. If you really need that in your case, Cuttle would not be a good option.)

Friday, May 15, 2015

Scope finding in a source file

This post discusses an issue I met when building a text editor plugin that finds the class/function scope to which the current line on the editor belongs. The problem can be broken into two parts: (i) given a set of possibly overlapping ranges on a one-dimensional plane, find the ranges that cover a point on the plane; and (ii) given a set of overlapping ranges, get the topmost range, where the height of the ranges follows the ascending order of their starting points (the higher in the stack, the later in the sequence). Note, this is not a hard problem. This post documents how I encountered and worked on it.

So, here is the story.

When I built the early version of the plugin, I wanted to ship it as soon as possible and see if anyone would download it (the Atom editor does not expose plugin usage data to authors yet, so the only number I have is downloads). Thus, there was not much of a thought process in those days.

The early implementation models each scope as a range with a start and end line. To find the scope that the current line belongs to, the problem becomes a range search problem. Ranges overlap when there are nested scopes, in which case the start and end lines of the inner scope are always enclosed by those of the outer scopes. So, I can sort all scopes by their start line in ascending order, and the innermost scope on the current line is the last one in the sequence whose line range encloses the current line. This is O(N log N) preprocessing + O(N) lookup. I was happy with it.
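That lookup can be sketched as follows (a Python sketch of the idea; the actual plugin is JavaScript, and the tuple layout is made up for the example):

```python
def innermost_scope(scopes, line):
    # scopes: iterable of (start, end, name) tuples; a nested scope
    # is fully enclosed by its outer scopes.
    # After sorting by start line ascending, the innermost scope
    # covering `line` is the last one whose range encloses it.
    found = None
    for start, end, name in sorted(scopes, key=lambda s: s[0]):
        if start <= line <= end:
            found = name
    return found
```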

So far so good?

The issue did not surface until I used the plugin to browse a long source file with dozens of functions (yup, shouldn't the file be split for readability?). When I kept moving the cursor down for a while, its movement was no longer smooth. The issue was that the plugin needs to find the scope upon each cursor line change. When I fired up the profiler, I found that 300 - 400ms were spent on scope finding when there were dozens of continuous cursor line changes. I was not sure whether the plugin was really the cause of the UX problem, but it was the one that took most of the processing time. So, time for optimization!

Since this is a range search problem, k-d tree, segment tree, and interval tree quickly came to my mind. There are several factors to consider in picking a solution: (i) availability of an existing implementation (I don't like reinventing without enhancement), (ii) speed of insert / delete / update (when a source file is edited, there is a high chance that scopes are moved), and (iii) lookup speed, of course. While I was still deciding which search tree best fits the issue, I raised a question to myself: why not simply hash the scope(s) on each line? A simple hash with a stack in each bucket is a good fit because:

(i) I just need a JavaScript object (hash) and array (stack) to build it.
(ii) A typical source file has fewer than a thousand lines with a dozen scopes. The worst case is having thousands of pointers (lines * scopes) referring to a dozen strings (scope names). That should not take much space.
(iii) A file edit would introduce quite a lot of scope movement (e.g. inserting a new line at the top of the file pushes all scopes down). Maintaining a data structure via insert / update / delete is like rebuilding it in the worst case. Building the big hash takes O(NL), number of scopes * number of lines in the file (which is several thousand iterations). The hash building process is offline and I don't expect it to take long, so I am happy with that.
(iv) O(1) lookup, the best I can get.
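Put together, the hash can be sketched like this (again a Python sketch of the idea; the plugin itself uses a JavaScript object and arrays):

```python
def build_scope_table(scopes, num_lines):
    # Map each line number to the stack of scopes covering it.
    # Inserting in ascending order of start line keeps each stack
    # ordered outermost-first, so the top is the innermost scope.
    table = {line: [] for line in range(1, num_lines + 1)}
    for start, end, name in sorted(scopes, key=lambda s: s[0]):
        for line in range(start, end + 1):
            table[line].append(name)
    return table

# Lookup on each cursor move is then O(1):
#   table[current_line][-1] is the innermost scope, if any.
```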

As a result, the plugin is using a hash for scope finding.

© 2009 Emptiness Blogging. All Rights Reserved | Powered by Blogger